Feb 12 20:25:55.966642 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 12 20:25:55.966679 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 20:25:55.966701 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:25:55.966716 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 12 20:25:55.966730 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:25:55.966743 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 12 20:25:55.966759 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 12 20:25:55.966773 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 20:25:55.966786 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 12 20:25:55.966799 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 20:25:55.966817 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 12 20:25:55.966831 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 12 20:25:55.966844 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 12 20:25:55.966858 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 20:25:55.967928 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 12 20:25:55.967971 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 12 20:25:55.967987 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 12 20:25:55.968002 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 12 20:25:55.968017 kernel: printk: bootconsole [uart0] enabled
Feb 12 20:25:55.968031 kernel: NUMA: Failed to initialise from firmware
Feb 12 20:25:55.968046 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:25:55.968061 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 12 20:25:55.968075 kernel: Zone ranges:
Feb 12 20:25:55.968089 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 12 20:25:55.968104 kernel: DMA32 empty
Feb 12 20:25:55.968118 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 12 20:25:55.968136 kernel: Movable zone start for each node
Feb 12 20:25:55.968150 kernel: Early memory node ranges
Feb 12 20:25:55.968165 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 12 20:25:55.968179 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 12 20:25:55.968193 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 12 20:25:55.968208 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 12 20:25:55.968222 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 12 20:25:55.968236 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 12 20:25:55.968251 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:25:55.968265 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 12 20:25:55.968279 kernel: psci: probing for conduit method from ACPI.
Feb 12 20:25:55.968294 kernel: psci: PSCIv1.0 detected in firmware.
Feb 12 20:25:55.968312 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 20:25:55.968327 kernel: psci: Trusted OS migration not required
Feb 12 20:25:55.968348 kernel: psci: SMC Calling Convention v1.1
Feb 12 20:25:55.968364 kernel: ACPI: SRAT not present
Feb 12 20:25:55.968379 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 20:25:55.968398 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 20:25:55.968414 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 20:25:55.968429 kernel: Detected PIPT I-cache on CPU0
Feb 12 20:25:55.968444 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 20:25:55.968459 kernel: CPU features: detected: Spectre-v2
Feb 12 20:25:55.968474 kernel: CPU features: detected: Spectre-v3a
Feb 12 20:25:55.968489 kernel: CPU features: detected: Spectre-BHB
Feb 12 20:25:55.968504 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 20:25:55.968519 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 20:25:55.968533 kernel: CPU features: detected: ARM erratum 1742098
Feb 12 20:25:55.968548 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 12 20:25:55.968567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 12 20:25:55.968582 kernel: Policy zone: Normal
Feb 12 20:25:55.968600 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:25:55.968616 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:25:55.968631 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:25:55.968647 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:25:55.968662 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:25:55.968677 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 12 20:25:55.968693 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 12 20:25:55.968708 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:25:55.968727 kernel: trace event string verifier disabled
Feb 12 20:25:55.968742 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 20:25:55.968758 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:25:55.968773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:25:55.968789 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 20:25:55.968804 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:25:55.968819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:25:55.968835 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:25:55.968850 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 20:25:55.968886 kernel: GICv3: 96 SPIs implemented
Feb 12 20:25:55.968904 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 20:25:55.968919 kernel: GICv3: Distributor has no Range Selector support
Feb 12 20:25:55.968940 kernel: Root IRQ handler: gic_handle_irq
Feb 12 20:25:55.968955 kernel: GICv3: 16 PPIs implemented
Feb 12 20:25:55.968987 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 12 20:25:55.969004 kernel: ACPI: SRAT not present
Feb 12 20:25:55.969019 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 12 20:25:55.969034 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 20:25:55.969050 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 20:25:55.969065 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 12 20:25:55.969080 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 12 20:25:55.969095 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 12 20:25:55.969110 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 12 20:25:55.969130 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 12 20:25:55.969146 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 12 20:25:55.969161 kernel: Console: colour dummy device 80x25
Feb 12 20:25:55.969177 kernel: printk: console [tty1] enabled
Feb 12 20:25:55.969192 kernel: ACPI: Core revision 20210730
Feb 12 20:25:55.969208 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 12 20:25:55.969224 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:25:55.969240 kernel: LSM: Security Framework initializing
Feb 12 20:25:55.969255 kernel: SELinux: Initializing.
Feb 12 20:25:55.969271 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:25:55.969291 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:25:55.969306 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:25:55.969321 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 12 20:25:55.969337 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 12 20:25:55.969352 kernel: Remapping and enabling EFI services.
Feb 12 20:25:55.969367 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:25:55.969383 kernel: Detected PIPT I-cache on CPU1
Feb 12 20:25:55.969398 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 12 20:25:55.969414 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 12 20:25:55.969434 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 12 20:25:55.969449 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:25:55.969464 kernel: SMP: Total of 2 processors activated.
Feb 12 20:25:55.969480 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 20:25:55.969495 kernel: CPU features: detected: 32-bit EL1 Support
Feb 12 20:25:55.969511 kernel: CPU features: detected: CRC32 instructions
Feb 12 20:25:55.969526 kernel: CPU: All CPU(s) started at EL1
Feb 12 20:25:55.969541 kernel: alternatives: patching kernel code
Feb 12 20:25:55.969556 kernel: devtmpfs: initialized
Feb 12 20:25:55.969575 kernel: KASLR disabled due to lack of seed
Feb 12 20:25:55.969591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:25:55.969607 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:25:55.969633 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:25:55.969653 kernel: SMBIOS 3.0.0 present.
Feb 12 20:25:55.969669 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 12 20:25:55.969685 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:25:55.969701 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 20:25:55.969717 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 20:25:55.969734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 20:25:55.969750 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:25:55.969766 kernel: audit: type=2000 audit(0.251:1): state=initialized audit_enabled=0 res=1
Feb 12 20:25:55.969786 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:25:55.969802 kernel: cpuidle: using governor menu
Feb 12 20:25:55.969818 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 20:25:55.969834 kernel: ASID allocator initialised with 32768 entries
Feb 12 20:25:55.969850 kernel: ACPI: bus type PCI registered
Feb 12 20:25:55.969887 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:25:55.969905 kernel: Serial: AMBA PL011 UART driver
Feb 12 20:25:55.969922 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:25:55.969938 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 20:25:55.969954 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:25:55.969970 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 20:25:55.969986 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:25:55.970002 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 20:25:55.970018 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:25:55.970039 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:25:55.970055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:25:55.970071 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:25:55.970087 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:25:55.970104 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:25:55.970120 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:25:55.970136 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:25:55.970152 kernel: ACPI: Interpreter enabled
Feb 12 20:25:55.970168 kernel: ACPI: Using GIC for interrupt routing
Feb 12 20:25:55.970188 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 20:25:55.970204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 12 20:25:55.970497 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:25:55.970718 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 20:25:55.978303 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 20:25:55.978563 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 12 20:25:55.978796 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 12 20:25:55.978828 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 12 20:25:55.978846 kernel: acpiphp: Slot [1] registered
Feb 12 20:25:55.978886 kernel: acpiphp: Slot [2] registered
Feb 12 20:25:55.978906 kernel: acpiphp: Slot [3] registered
Feb 12 20:25:55.978924 kernel: acpiphp: Slot [4] registered
Feb 12 20:25:55.978940 kernel: acpiphp: Slot [5] registered
Feb 12 20:25:55.978957 kernel: acpiphp: Slot [6] registered
Feb 12 20:25:55.978975 kernel: acpiphp: Slot [7] registered
Feb 12 20:25:55.978991 kernel: acpiphp: Slot [8] registered
Feb 12 20:25:55.979012 kernel: acpiphp: Slot [9] registered
Feb 12 20:25:55.979028 kernel: acpiphp: Slot [10] registered
Feb 12 20:25:55.979044 kernel: acpiphp: Slot [11] registered
Feb 12 20:25:55.979060 kernel: acpiphp: Slot [12] registered
Feb 12 20:25:55.979077 kernel: acpiphp: Slot [13] registered
Feb 12 20:25:55.979093 kernel: acpiphp: Slot [14] registered
Feb 12 20:25:55.979109 kernel: acpiphp: Slot [15] registered
Feb 12 20:25:55.979125 kernel: acpiphp: Slot [16] registered
Feb 12 20:25:55.979141 kernel: acpiphp: Slot [17] registered
Feb 12 20:25:55.979157 kernel: acpiphp: Slot [18] registered
Feb 12 20:25:55.979177 kernel: acpiphp: Slot [19] registered
Feb 12 20:25:55.979193 kernel: acpiphp: Slot [20] registered
Feb 12 20:25:55.979209 kernel: acpiphp: Slot [21] registered
Feb 12 20:25:55.979225 kernel: acpiphp: Slot [22] registered
Feb 12 20:25:55.979241 kernel: acpiphp: Slot [23] registered
Feb 12 20:25:55.979257 kernel: acpiphp: Slot [24] registered
Feb 12 20:25:55.979272 kernel: acpiphp: Slot [25] registered
Feb 12 20:25:55.979288 kernel: acpiphp: Slot [26] registered
Feb 12 20:25:55.979304 kernel: acpiphp: Slot [27] registered
Feb 12 20:25:55.979324 kernel: acpiphp: Slot [28] registered
Feb 12 20:25:55.979340 kernel: acpiphp: Slot [29] registered
Feb 12 20:25:55.979356 kernel: acpiphp: Slot [30] registered
Feb 12 20:25:55.979372 kernel: acpiphp: Slot [31] registered
Feb 12 20:25:55.979388 kernel: PCI host bridge to bus 0000:00
Feb 12 20:25:55.979653 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 12 20:25:55.979890 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 20:25:55.980112 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:25:55.980330 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 12 20:25:55.980581 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 12 20:25:55.980827 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 12 20:25:56.007154 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 12 20:25:56.007395 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 20:25:56.007615 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 12 20:25:56.007843 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:25:56.008108 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 20:25:56.008313 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 12 20:25:56.008512 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 12 20:25:56.008723 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 12 20:25:56.008946 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:25:56.009173 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 12 20:25:56.009387 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 12 20:25:56.009598 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 12 20:25:56.009795 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 12 20:25:56.010031 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 12 20:25:56.010225 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 12 20:25:56.010407 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 20:25:56.010583 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:25:56.010611 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 20:25:56.010629 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 20:25:56.010646 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 20:25:56.010662 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 20:25:56.010678 kernel: iommu: Default domain type: Translated
Feb 12 20:25:56.010695 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 20:25:56.010711 kernel: vgaarb: loaded
Feb 12 20:25:56.010727 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:25:56.010743 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:25:56.010764 kernel: PTP clock support registered
Feb 12 20:25:56.010780 kernel: Registered efivars operations
Feb 12 20:25:56.010796 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 20:25:56.010813 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:25:56.010829 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:25:56.010846 kernel: pnp: PnP ACPI init
Feb 12 20:25:56.015155 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 12 20:25:56.015195 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 20:25:56.015213 kernel: NET: Registered PF_INET protocol family
Feb 12 20:25:56.015239 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:25:56.015256 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:25:56.015273 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:25:56.015289 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:25:56.015305 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:25:56.015322 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:25:56.015338 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:25:56.015354 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:25:56.015371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:25:56.015391 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:25:56.015407 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 12 20:25:56.015423 kernel: kvm [1]: HYP mode not available
Feb 12 20:25:56.015440 kernel: Initialise system trusted keyrings
Feb 12 20:25:56.015456 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:25:56.015472 kernel: Key type asymmetric registered
Feb 12 20:25:56.015488 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:25:56.015504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:25:56.015520 kernel: io scheduler mq-deadline registered
Feb 12 20:25:56.015541 kernel: io scheduler kyber registered
Feb 12 20:25:56.015557 kernel: io scheduler bfq registered
Feb 12 20:25:56.015773 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 12 20:25:56.015800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 20:25:56.015816 kernel: ACPI: button: Power Button [PWRB]
Feb 12 20:25:56.015833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:25:56.015850 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 12 20:25:56.016076 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 12 20:25:56.016106 kernel: printk: console [ttyS0] disabled
Feb 12 20:25:56.016123 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 12 20:25:56.016140 kernel: printk: console [ttyS0] enabled
Feb 12 20:25:56.016156 kernel: printk: bootconsole [uart0] disabled
Feb 12 20:25:56.016172 kernel: thunder_xcv, ver 1.0
Feb 12 20:25:56.016188 kernel: thunder_bgx, ver 1.0
Feb 12 20:25:56.016204 kernel: nicpf, ver 1.0
Feb 12 20:25:56.016220 kernel: nicvf, ver 1.0
Feb 12 20:25:56.016428 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 20:25:56.016626 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T20:25:55 UTC (1707769555)
Feb 12 20:25:56.016649 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 20:25:56.016666 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:25:56.016682 kernel: Segment Routing with IPv6
Feb 12 20:25:56.016698 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:25:56.016714 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:25:56.016730 kernel: Key type dns_resolver registered
Feb 12 20:25:56.016746 kernel: registered taskstats version 1
Feb 12 20:25:56.016767 kernel: Loading compiled-in X.509 certificates
Feb 12 20:25:56.016783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 20:25:56.016799 kernel: Key type .fscrypt registered
Feb 12 20:25:56.016815 kernel: Key type fscrypt-provisioning registered
Feb 12 20:25:56.016831 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:25:56.016847 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:25:56.016880 kernel: ima: No architecture policies found
Feb 12 20:25:56.016900 kernel: Freeing unused kernel memory: 34688K
Feb 12 20:25:56.016916 kernel: Run /init as init process
Feb 12 20:25:56.016937 kernel: with arguments:
Feb 12 20:25:56.016953 kernel: /init
Feb 12 20:25:56.016985 kernel: with environment:
Feb 12 20:25:56.017003 kernel: HOME=/
Feb 12 20:25:56.017020 kernel: TERM=linux
Feb 12 20:25:56.017035 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:25:56.017057 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:25:56.017078 systemd[1]: Detected virtualization amazon.
Feb 12 20:25:56.017101 systemd[1]: Detected architecture arm64.
Feb 12 20:25:56.017118 systemd[1]: Running in initrd.
Feb 12 20:25:56.017136 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:25:56.017153 systemd[1]: Hostname set to .
Feb 12 20:25:56.017171 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:25:56.017188 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:25:56.017206 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:25:56.017223 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:25:56.017244 systemd[1]: Reached target paths.target.
Feb 12 20:25:56.017261 systemd[1]: Reached target slices.target.
Feb 12 20:25:56.017278 systemd[1]: Reached target swap.target.
Feb 12 20:25:56.017296 systemd[1]: Reached target timers.target.
Feb 12 20:25:56.017314 systemd[1]: Listening on iscsid.socket.
Feb 12 20:25:56.017331 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:25:56.017348 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:25:56.017366 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:25:56.017388 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:25:56.017405 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:25:56.017423 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:25:56.017441 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:25:56.017458 systemd[1]: Reached target sockets.target.
Feb 12 20:25:56.017476 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:25:56.017493 systemd[1]: Finished network-cleanup.service.
Feb 12 20:25:56.017511 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:25:56.017528 systemd[1]: Starting systemd-journald.service...
Feb 12 20:25:56.017549 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:25:56.017567 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:25:56.017584 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:25:56.017601 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:25:56.017619 kernel: audit: type=1130 audit(1707769555.983:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.017637 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:25:56.017655 kernel: audit: type=1130 audit(1707769555.998:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.017672 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:25:56.017696 systemd-journald[308]: Journal started
Feb 12 20:25:56.017784 systemd-journald[308]: Runtime Journal (/run/log/journal/ec230efa47ce6b7484128edd8225c347) is 8.0M, max 75.4M, 67.4M free.
Feb 12 20:25:55.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:55.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:55.967010 systemd-modules-load[309]: Inserted module 'overlay'
Feb 12 20:25:56.034126 kernel: audit: type=1130 audit(1707769556.011:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.034164 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:25:56.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.043897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:25:56.055141 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 12 20:25:56.059104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:25:56.059173 kernel: Bridge firewalling registered
Feb 12 20:25:56.074843 systemd[1]: Started systemd-journald.service.
Feb 12 20:25:56.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.083316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:25:56.087893 kernel: audit: type=1130 audit(1707769556.074:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.100897 kernel: audit: type=1130 audit(1707769556.090:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.105905 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:25:56.129606 kernel: SCSI subsystem initialized
Feb 12 20:25:56.129645 kernel: audit: type=1130 audit(1707769556.109:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.111751 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:25:56.118277 systemd-resolved[310]: Positive Trust Anchors:
Feb 12 20:25:56.118292 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:25:56.118345 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:25:56.185731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:25:56.185806 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:25:56.187345 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:25:56.187417 dracut-cmdline[326]: dracut-dracut-053
Feb 12 20:25:56.201010 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:25:56.217897 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 12 20:25:56.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:56.219330 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:25:56.237401 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:25:56.253915 kernel: audit: type=1130 audit(1707769556.222:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.261827 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:25:56.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.274134 kernel: audit: type=1130 audit(1707769556.264:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.377908 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:25:56.389904 kernel: iscsi: registered transport (tcp) Feb 12 20:25:56.412894 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:25:56.414893 kernel: QLogic iSCSI HBA Driver Feb 12 20:25:56.588902 kernel: random: crng init done Feb 12 20:25:56.588895 systemd-resolved[310]: Defaulting to hostname 'linux'. Feb 12 20:25:56.594482 systemd[1]: Started systemd-resolved.service. Feb 12 20:25:56.601841 systemd[1]: Reached target nss-lookup.target. Feb 12 20:25:56.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.616895 kernel: audit: type=1130 audit(1707769556.600:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.618722 systemd[1]: Finished dracut-cmdline.service. 
Feb 12 20:25:56.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:56.624205 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:25:56.690926 kernel: raid6: neonx8 gen() 6301 MB/s Feb 12 20:25:56.708909 kernel: raid6: neonx8 xor() 4686 MB/s Feb 12 20:25:56.726897 kernel: raid6: neonx4 gen() 6291 MB/s Feb 12 20:25:56.744904 kernel: raid6: neonx4 xor() 4849 MB/s Feb 12 20:25:56.762907 kernel: raid6: neonx2 gen() 5620 MB/s Feb 12 20:25:56.780904 kernel: raid6: neonx2 xor() 4470 MB/s Feb 12 20:25:56.798897 kernel: raid6: neonx1 gen() 4371 MB/s Feb 12 20:25:56.816903 kernel: raid6: neonx1 xor() 3640 MB/s Feb 12 20:25:56.834905 kernel: raid6: int64x8 gen() 3347 MB/s Feb 12 20:25:56.852900 kernel: raid6: int64x8 xor() 2070 MB/s Feb 12 20:25:56.870895 kernel: raid6: int64x4 gen() 3733 MB/s Feb 12 20:25:56.888896 kernel: raid6: int64x4 xor() 2186 MB/s Feb 12 20:25:56.906893 kernel: raid6: int64x2 gen() 3525 MB/s Feb 12 20:25:56.924898 kernel: raid6: int64x2 xor() 1933 MB/s Feb 12 20:25:56.942906 kernel: raid6: int64x1 gen() 2740 MB/s Feb 12 20:25:56.962398 kernel: raid6: int64x1 xor() 1441 MB/s Feb 12 20:25:56.962433 kernel: raid6: using algorithm neonx8 gen() 6301 MB/s Feb 12 20:25:56.962457 kernel: raid6: .... xor() 4686 MB/s, rmw enabled Feb 12 20:25:56.964194 kernel: raid6: using neon recovery algorithm Feb 12 20:25:56.982902 kernel: xor: measuring software checksum speed Feb 12 20:25:56.985904 kernel: 8regs : 9334 MB/sec Feb 12 20:25:56.987895 kernel: 32regs : 11157 MB/sec Feb 12 20:25:56.992062 kernel: arm64_neon : 9664 MB/sec Feb 12 20:25:56.992096 kernel: xor: using function: 32regs (11157 MB/sec) Feb 12 20:25:57.087904 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 12 20:25:57.105654 systemd[1]: Finished dracut-pre-udev.service. 
Feb 12 20:25:57.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:57.109000 audit: BPF prog-id=7 op=LOAD Feb 12 20:25:57.109000 audit: BPF prog-id=8 op=LOAD Feb 12 20:25:57.112376 systemd[1]: Starting systemd-udevd.service... Feb 12 20:25:57.142616 systemd-udevd[508]: Using default interface naming scheme 'v252'. Feb 12 20:25:57.154365 systemd[1]: Started systemd-udevd.service. Feb 12 20:25:57.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:57.167833 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:25:57.195497 dracut-pre-trigger[525]: rd.md=0: removing MD RAID activation Feb 12 20:25:57.257702 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:25:57.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:57.263524 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:25:57.369776 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:25:57.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:57.501065 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 12 20:25:57.501136 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 12 20:25:57.516722 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 12 20:25:57.517110 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 12 20:25:57.529331 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 12 20:25:57.529405 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 12 20:25:57.538895 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:3e:cc:a5:a4:4d Feb 12 20:25:57.539245 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 12 20:25:57.548323 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:25:57.548392 kernel: GPT:9289727 != 16777215 Feb 12 20:25:57.548417 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:25:57.550514 kernel: GPT:9289727 != 16777215 Feb 12 20:25:57.551800 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:25:57.555226 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 20:25:57.559220 (udev-worker)[572]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:25:57.634903 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (558) Feb 12 20:25:57.667775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:25:57.737812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:25:57.764199 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:25:57.773090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:25:57.816197 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:25:57.824571 systemd[1]: Starting disk-uuid.service... Feb 12 20:25:57.839854 disk-uuid[671]: Primary Header is updated. 
Feb 12 20:25:57.839854 disk-uuid[671]: Secondary Entries is updated. Feb 12 20:25:57.839854 disk-uuid[671]: Secondary Header is updated. Feb 12 20:25:57.852488 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 20:25:57.860892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 20:25:58.872418 disk-uuid[672]: The operation has completed successfully. Feb 12 20:25:58.877085 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 20:25:59.048054 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:25:59.051145 systemd[1]: Finished disk-uuid.service. Feb 12 20:25:59.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.068189 systemd[1]: Starting verity-setup.service... Feb 12 20:25:59.101908 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 20:25:59.179037 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:25:59.186228 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:25:59.192330 systemd[1]: Finished verity-setup.service. Feb 12 20:25:59.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.279892 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:25:59.280545 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:25:59.284052 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:25:59.285330 systemd[1]: Starting ignition-setup.service... Feb 12 20:25:59.296110 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 12 20:25:59.319679 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 12 20:25:59.319745 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 20:25:59.319770 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 20:25:59.329916 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 20:25:59.347751 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:25:59.384009 systemd[1]: Finished ignition-setup.service. Feb 12 20:25:59.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.392136 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:25:59.467689 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:25:59.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.471000 audit: BPF prog-id=9 op=LOAD Feb 12 20:25:59.474749 systemd[1]: Starting systemd-networkd.service... Feb 12 20:25:59.525610 systemd-networkd[1184]: lo: Link UP Feb 12 20:25:59.525636 systemd-networkd[1184]: lo: Gained carrier Feb 12 20:25:59.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.526765 systemd-networkd[1184]: Enumeration completed Feb 12 20:25:59.527449 systemd-networkd[1184]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:25:59.529438 systemd[1]: Started systemd-networkd.service. Feb 12 20:25:59.532313 systemd[1]: Reached target network.target. 
Feb 12 20:25:59.536354 systemd-networkd[1184]: eth0: Link UP Feb 12 20:25:59.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.536363 systemd-networkd[1184]: eth0: Gained carrier Feb 12 20:25:59.543775 systemd[1]: Starting iscsiuio.service... Feb 12 20:25:59.559709 systemd-networkd[1184]: eth0: DHCPv4 address 172.31.21.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 20:25:59.587350 iscsid[1189]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:25:59.587350 iscsid[1189]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 20:25:59.587350 iscsid[1189]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:25:59.587350 iscsid[1189]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:25:59.587350 iscsid[1189]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:25:59.587350 iscsid[1189]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:25:59.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:59.564816 systemd[1]: Started iscsiuio.service. Feb 12 20:25:59.569744 systemd[1]: Starting iscsid.service... Feb 12 20:25:59.582615 systemd[1]: Started iscsid.service. Feb 12 20:25:59.591937 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:25:59.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:59.626691 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:25:59.629782 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:25:59.634335 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:25:59.636544 systemd[1]: Reached target remote-fs.target. Feb 12 20:25:59.639963 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:25:59.662492 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:26:00.054846 ignition[1128]: Ignition 2.14.0 Feb 12 20:26:00.055413 ignition[1128]: Stage: fetch-offline Feb 12 20:26:00.055727 ignition[1128]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:00.055787 ignition[1128]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:00.077001 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:00.080449 ignition[1128]: Ignition finished successfully Feb 12 20:26:00.082197 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:26:00.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.101793 systemd[1]: Starting ignition-fetch.service... 
Feb 12 20:26:00.114133 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 12 20:26:00.114169 kernel: audit: type=1130 audit(1707769560.084:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.118639 ignition[1208]: Ignition 2.14.0 Feb 12 20:26:00.120637 ignition[1208]: Stage: fetch Feb 12 20:26:00.122531 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:00.125404 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:00.137667 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:00.141032 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:00.165394 ignition[1208]: INFO : PUT result: OK Feb 12 20:26:00.170131 ignition[1208]: DEBUG : parsed url from cmdline: "" Feb 12 20:26:00.172340 ignition[1208]: INFO : no config URL provided Feb 12 20:26:00.172340 ignition[1208]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:26:00.172340 ignition[1208]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 12 20:26:00.172340 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:00.183936 ignition[1208]: INFO : PUT result: OK Feb 12 20:26:00.186097 ignition[1208]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 12 20:26:00.189811 ignition[1208]: INFO : GET result: OK Feb 12 20:26:00.195810 ignition[1208]: DEBUG : parsing config with SHA512: e09a335ca469579b70e3a508a8fbbdd11a3afeb2aaea042145ef35e8d4df34ec7f2c0df4f8b6006c05d6b01f100f6e0f21fbfbcabbca853e566f075bddc5ed46 Feb 12 20:26:00.224306 unknown[1208]: fetched base config from "system" Feb 12 20:26:00.224338 unknown[1208]: fetched base config from 
"system" Feb 12 20:26:00.224366 unknown[1208]: fetched user config from "aws" Feb 12 20:26:00.231217 ignition[1208]: fetch: fetch complete Feb 12 20:26:00.231244 ignition[1208]: fetch: fetch passed Feb 12 20:26:00.231347 ignition[1208]: Ignition finished successfully Feb 12 20:26:00.238851 systemd[1]: Finished ignition-fetch.service. Feb 12 20:26:00.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.250151 systemd[1]: Starting ignition-kargs.service... Feb 12 20:26:00.255910 kernel: audit: type=1130 audit(1707769560.241:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.268092 ignition[1214]: Ignition 2.14.0 Feb 12 20:26:00.268127 ignition[1214]: Stage: kargs Feb 12 20:26:00.268447 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:00.268509 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:00.283037 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:00.285802 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:00.289322 ignition[1214]: INFO : PUT result: OK Feb 12 20:26:00.294397 ignition[1214]: kargs: kargs passed Feb 12 20:26:00.294498 ignition[1214]: Ignition finished successfully Feb 12 20:26:00.299458 systemd[1]: Finished ignition-kargs.service. Feb 12 20:26:00.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:00.306108 systemd[1]: Starting ignition-disks.service... Feb 12 20:26:00.315535 kernel: audit: type=1130 audit(1707769560.301:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.322846 ignition[1220]: Ignition 2.14.0 Feb 12 20:26:00.322892 ignition[1220]: Stage: disks Feb 12 20:26:00.323197 ignition[1220]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:00.323255 ignition[1220]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:00.338191 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:00.341050 ignition[1220]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:00.344768 ignition[1220]: INFO : PUT result: OK Feb 12 20:26:00.350844 ignition[1220]: disks: disks passed Feb 12 20:26:00.352924 ignition[1220]: Ignition finished successfully Feb 12 20:26:00.356652 systemd[1]: Finished ignition-disks.service. Feb 12 20:26:00.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.357118 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:26:00.370204 kernel: audit: type=1130 audit(1707769560.355:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.372618 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:26:00.374816 systemd[1]: Reached target local-fs.target. Feb 12 20:26:00.376927 systemd[1]: Reached target sysinit.target. Feb 12 20:26:00.379072 systemd[1]: Reached target basic.target. 
Feb 12 20:26:00.401306 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:26:00.448788 systemd-fsck[1228]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 20:26:00.455712 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:26:00.467548 kernel: audit: type=1130 audit(1707769560.456:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.459414 systemd[1]: Mounting sysroot.mount... Feb 12 20:26:00.486925 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:26:00.488751 systemd[1]: Mounted sysroot.mount. Feb 12 20:26:00.492692 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:26:00.507538 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:26:00.511458 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:26:00.511596 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:26:00.511664 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:26:00.531419 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:26:00.547021 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:26:00.556530 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 20:26:00.573911 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1245) Feb 12 20:26:00.580428 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 12 20:26:00.580494 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 20:26:00.580519 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 20:26:00.589908 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 20:26:00.593819 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:26:00.615058 systemd-networkd[1184]: eth0: Gained IPv6LL Feb 12 20:26:00.618680 initrd-setup-root[1250]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:26:00.636313 initrd-setup-root[1276]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:26:00.646304 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:26:00.656484 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:26:00.842124 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:26:00.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.853098 systemd[1]: Starting ignition-mount.service... Feb 12 20:26:00.868047 kernel: audit: type=1130 audit(1707769560.846:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.862330 systemd[1]: Starting sysroot-boot.service... Feb 12 20:26:00.878837 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:26:00.879037 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:26:00.916059 systemd[1]: Finished sysroot-boot.service. 
Feb 12 20:26:00.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.928023 ignition[1311]: INFO : Ignition 2.14.0 Feb 12 20:26:00.928023 ignition[1311]: INFO : Stage: mount Feb 12 20:26:00.928023 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:00.928023 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:00.946428 kernel: audit: type=1130 audit(1707769560.920:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.946468 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:00.946468 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:00.953372 ignition[1311]: INFO : PUT result: OK Feb 12 20:26:00.965462 ignition[1311]: INFO : mount: mount passed Feb 12 20:26:00.967402 ignition[1311]: INFO : Ignition finished successfully Feb 12 20:26:00.971112 systemd[1]: Finished ignition-mount.service. Feb 12 20:26:00.981823 kernel: audit: type=1130 audit(1707769560.969:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:00.983113 systemd[1]: Starting ignition-files.service... Feb 12 20:26:01.000368 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 20:26:01.017903 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1320) Feb 12 20:26:01.023418 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 12 20:26:01.023459 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 20:26:01.025615 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 20:26:01.032899 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 20:26:01.036915 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:26:01.058436 ignition[1339]: INFO : Ignition 2.14.0 Feb 12 20:26:01.058436 ignition[1339]: INFO : Stage: files Feb 12 20:26:01.062439 ignition[1339]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:01.062439 ignition[1339]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:01.080927 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:01.084319 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:01.087825 ignition[1339]: INFO : PUT result: OK Feb 12 20:26:01.093457 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:26:01.098043 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:26:01.098043 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:26:01.147634 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:26:01.151078 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:26:01.155493 unknown[1339]: wrote ssh authorized keys file for user: core Feb 12 20:26:01.158170 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh 
keys to user "core" Feb 12 20:26:01.162380 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 20:26:01.167113 ignition[1339]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 12 20:26:01.660070 ignition[1339]: INFO : GET result: OK Feb 12 20:26:02.131495 ignition[1339]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 20:26:02.139189 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 20:26:02.139189 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 20:26:02.139189 ignition[1339]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 20:26:02.519346 ignition[1339]: INFO : GET result: OK Feb 12 20:26:02.801673 ignition[1339]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 20:26:02.807364 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 20:26:02.807364 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 20:26:02.816537 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:26:02.826506 ignition[1339]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1458195447" Feb 12 20:26:02.830134 ignition[1339]: 
CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1458195447": device or resource busy Feb 12 20:26:02.834208 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1458195447", trying btrfs: device or resource busy Feb 12 20:26:02.834208 ignition[1339]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1458195447" Feb 12 20:26:02.850185 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1342) Feb 12 20:26:02.850223 ignition[1339]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1458195447" Feb 12 20:26:02.863281 ignition[1339]: INFO : op(3): [started] unmounting "/mnt/oem1458195447" Feb 12 20:26:02.868345 systemd[1]: mnt-oem1458195447.mount: Deactivated successfully. Feb 12 20:26:02.872055 ignition[1339]: INFO : op(3): [finished] unmounting "/mnt/oem1458195447" Feb 12 20:26:02.872055 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 20:26:02.872055 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:26:02.883347 ignition[1339]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 20:26:03.017571 ignition[1339]: INFO : GET result: OK Feb 12 20:26:03.621518 ignition[1339]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 12 20:26:03.628523 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:26:03.628523 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:26:03.628523 ignition[1339]: INFO : GET 
https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 12 20:26:03.702683 ignition[1339]: INFO : GET result: OK Feb 12 20:26:05.118621 ignition[1339]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:26:05.128783 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 20:26:05.128783 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:26:05.166756 ignition[1339]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem60767842" Feb 12 20:26:05.170331 ignition[1339]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem60767842": device or resource busy Feb 12 20:26:05.170331 ignition[1339]: 
ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem60767842", trying btrfs: device or resource busy Feb 12 20:26:05.178673 ignition[1339]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem60767842" Feb 12 20:26:05.184652 ignition[1339]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem60767842" Feb 12 20:26:05.184652 ignition[1339]: INFO : op(6): [started] unmounting "/mnt/oem60767842" Feb 12 20:26:05.184652 ignition[1339]: INFO : op(6): [finished] unmounting "/mnt/oem60767842" Feb 12 20:26:05.184652 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 20:26:05.184652 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:26:05.184652 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:26:05.211370 systemd[1]: mnt-oem60767842.mount: Deactivated successfully. 
Feb 12 20:26:05.233058 ignition[1339]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2405217913" Feb 12 20:26:05.236436 ignition[1339]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2405217913": device or resource busy Feb 12 20:26:05.236436 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2405217913", trying btrfs: device or resource busy Feb 12 20:26:05.236436 ignition[1339]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2405217913" Feb 12 20:26:05.236436 ignition[1339]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2405217913" Feb 12 20:26:05.254060 ignition[1339]: INFO : op(9): [started] unmounting "/mnt/oem2405217913" Feb 12 20:26:05.258012 ignition[1339]: INFO : op(9): [finished] unmounting "/mnt/oem2405217913" Feb 12 20:26:05.258012 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:26:05.262654 systemd[1]: mnt-oem2405217913.mount: Deactivated successfully. 
Feb 12 20:26:05.273231 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:26:05.282100 ignition[1339]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:26:05.298635 ignition[1339]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem412923563" Feb 12 20:26:05.302143 ignition[1339]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem412923563": device or resource busy Feb 12 20:26:05.302143 ignition[1339]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem412923563", trying btrfs: device or resource busy Feb 12 20:26:05.302143 ignition[1339]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem412923563" Feb 12 20:26:05.320290 ignition[1339]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem412923563" Feb 12 20:26:05.320290 ignition[1339]: INFO : op(c): [started] unmounting "/mnt/oem412923563" Feb 12 20:26:05.320290 ignition[1339]: INFO : op(c): [finished] unmounting "/mnt/oem412923563" Feb 12 20:26:05.320290 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:26:05.320290 ignition[1339]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:26:05.320290 ignition[1339]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:26:05.320290 ignition[1339]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service" Feb 12 20:26:05.320290 ignition[1339]: INFO : files: op(f): op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at 
"/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(11): [started] processing unit "nvidia.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(11): [finished] processing unit "nvidia.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:26:05.359137 
ignition[1339]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:26:05.359137 ignition[1339]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:26:05.435833 ignition[1339]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:26:05.435833 ignition[1339]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:26:05.444600 ignition[1339]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:26:05.449348 ignition[1339]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:26:05.453731 ignition[1339]: INFO : files: files passed Feb 12 20:26:05.455856 ignition[1339]: INFO : Ignition finished successfully Feb 12 20:26:05.465039 systemd[1]: Finished ignition-files.service. Feb 12 20:26:05.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.476630 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:26:05.487134 kernel: audit: type=1130 audit(1707769565.467:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:05.484829 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:26:05.502849 systemd[1]: Starting ignition-quench.service... Feb 12 20:26:05.511321 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:26:05.513813 systemd[1]: Finished ignition-quench.service. Feb 12 20:26:05.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.522111 initrd-setup-root-after-ignition[1364]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:26:05.536328 kernel: audit: type=1130 audit(1707769565.516:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.536370 kernel: audit: type=1131 audit(1707769565.516:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.537065 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:26:05.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.542759 systemd[1]: Reached target ignition-complete.target. 
Feb 12 20:26:05.552183 kernel: audit: type=1130 audit(1707769565.541:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.555772 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:26:05.586462 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:26:05.589164 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:26:05.593214 systemd[1]: Reached target initrd-fs.target. Feb 12 20:26:05.615160 kernel: audit: type=1130 audit(1707769565.590:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.615201 kernel: audit: type=1131 audit(1707769565.590:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.609632 systemd[1]: Reached target initrd.target. Feb 12 20:26:05.615274 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:26:05.618813 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:26:05.644741 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:26:05.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:26:05.656226 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:26:05.663065 kernel: audit: type=1130 audit(1707769565.643:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.678170 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:26:05.682523 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:26:05.687081 systemd[1]: Stopped target timers.target. Feb 12 20:26:05.690994 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:26:05.691344 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:26:05.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.705175 systemd[1]: Stopped target initrd.target. Feb 12 20:26:05.706923 kernel: audit: type=1131 audit(1707769565.695:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.709109 systemd[1]: Stopped target basic.target. Feb 12 20:26:05.713073 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:26:05.717545 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:26:05.721787 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:26:05.726144 systemd[1]: Stopped target remote-fs.target. Feb 12 20:26:05.730075 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:26:05.734258 systemd[1]: Stopped target sysinit.target. Feb 12 20:26:05.738071 systemd[1]: Stopped target local-fs.target. Feb 12 20:26:05.741964 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:26:05.746036 systemd[1]: Stopped target swap.target. 
Feb 12 20:26:05.749655 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:26:05.752236 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:26:05.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.756383 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:26:05.766940 kernel: audit: type=1131 audit(1707769565.755:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.767161 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:26:05.768672 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:26:05.783524 kernel: audit: type=1131 audit(1707769565.768:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.772072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:26:05.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.772371 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:26:05.783547 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:26:05.784248 systemd[1]: Stopped ignition-files.service. Feb 12 20:26:05.796277 systemd[1]: Stopping ignition-mount.service... 
Feb 12 20:26:05.807805 iscsid[1189]: iscsid shutting down. Feb 12 20:26:05.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.810105 systemd[1]: Stopping iscsid.service... Feb 12 20:26:05.814690 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:26:05.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.820307 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:26:05.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.820610 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:26:05.825903 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:26:05.855798 ignition[1377]: INFO : Ignition 2.14.0 Feb 12 20:26:05.855798 ignition[1377]: INFO : Stage: umount Feb 12 20:26:05.855798 ignition[1377]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:26:05.855798 ignition[1377]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:26:05.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:05.826125 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:26:05.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.889583 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:26:05.889583 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:26:05.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.831348 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:26:05.899521 ignition[1377]: INFO : PUT result: OK Feb 12 20:26:05.831559 systemd[1]: Stopped iscsid.service. Feb 12 20:26:05.840243 systemd[1]: Stopping iscsiuio.service... Feb 12 20:26:05.869113 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:26:05.869298 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:26:05.882898 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:26:05.883103 systemd[1]: Stopped iscsiuio.service. Feb 12 20:26:05.889507 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:26:05.889779 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:26:05.920482 ignition[1377]: INFO : umount: umount passed Feb 12 20:26:05.922340 ignition[1377]: INFO : Ignition finished successfully Feb 12 20:26:05.930582 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:26:05.930784 systemd[1]: Stopped ignition-mount.service. 
Feb 12 20:26:05.935221 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:26:05.935315 systemd[1]: Stopped ignition-disks.service. Feb 12 20:26:05.937314 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:26:05.937395 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:26:05.939410 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:26:05.939484 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:26:05.941529 systemd[1]: Stopped target network.target. Feb 12 20:26:05.943349 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:26:05.943444 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:26:05.945785 systemd[1]: Stopped target paths.target. Feb 12 20:26:05.947581 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:26:05.952931 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:26:05.976423 systemd[1]: Stopped target slices.target. Feb 12 20:26:05.978320 systemd[1]: Stopped target sockets.target. Feb 12 20:26:05.980384 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:26:05.980466 systemd[1]: Closed iscsid.socket. Feb 12 20:26:05.982241 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:26:05.982308 systemd[1]: Closed iscsiuio.socket. Feb 12 20:26:05.984052 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:26:05.984145 systemd[1]: Stopped ignition-setup.service. Feb 12 20:26:05.986261 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:26:05.986358 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:26:05.991814 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:26:06.010184 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:26:05.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:05.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.025967 systemd-networkd[1184]: eth0: DHCPv6 lease lost Feb 12 20:26:06.029358 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:26:06.029565 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:26:06.036255 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:26:06.036598 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:26:06.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.042897 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 12 20:26:06.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.042990 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:26:06.046000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:26:06.046000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:26:06.051449 systemd[1]: Stopping network-cleanup.service... Feb 12 20:26:06.061220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:26:06.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.061347 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:26:06.065463 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:26:06.065553 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:26:06.067782 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:26:06.067901 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:26:06.072886 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:26:06.086638 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:26:06.086891 systemd[1]: Stopped network-cleanup.service. 
Feb 12 20:26:06.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.098674 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:26:06.106360 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:26:06.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.110145 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:26:06.110241 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:26:06.116801 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:26:06.116915 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:26:06.123210 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:26:06.123330 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:26:06.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.129634 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:26:06.129768 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:26:06.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.135985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:26:06.136093 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:26:06.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:26:06.144126 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:26:06.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.158000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:26:06.158126 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:26:06.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:06.162770 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:26:06.162884 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:26:06.167611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:26:06.167700 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:26:06.170502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:26:06.170705 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:26:06.173178 systemd[1]: Reached target initrd-switch-root.target. 
Feb 12 20:26:06.176610 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:26:06.195052 systemd[1]: mnt-oem412923563.mount: Deactivated successfully. Feb 12 20:26:06.195222 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:26:06.246016 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 12 20:26:06.195337 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:26:06.195443 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:26:06.202558 systemd[1]: Switching root. Feb 12 20:26:06.266438 systemd-journald[308]: Journal stopped Feb 12 20:26:12.172799 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:26:12.175477 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:26:12.175536 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:26:12.175570 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:26:12.175602 kernel: SELinux: policy capability open_perms=1 Feb 12 20:26:12.175641 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:26:12.175672 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:26:12.175703 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:26:12.175732 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:26:12.175763 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:26:12.175797 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:26:12.175830 systemd[1]: Successfully loaded SELinux policy in 114.537ms. Feb 12 20:26:12.175945 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.859ms. 
Feb 12 20:26:12.175983 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:26:12.176015 systemd[1]: Detected virtualization amazon. Feb 12 20:26:12.176045 systemd[1]: Detected architecture arm64. Feb 12 20:26:12.176075 systemd[1]: Detected first boot. Feb 12 20:26:12.176112 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:26:12.176142 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:26:12.176181 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:26:12.176215 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:26:12.176251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:26:12.176283 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 12 20:26:12.176323 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 12 20:26:12.176351 kernel: audit: type=1334 audit(1707769571.758:89): prog-id=12 op=LOAD Feb 12 20:26:12.176383 kernel: audit: type=1334 audit(1707769571.761:90): prog-id=3 op=UNLOAD Feb 12 20:26:12.176413 kernel: audit: type=1334 audit(1707769571.763:91): prog-id=13 op=LOAD Feb 12 20:26:12.176452 kernel: audit: type=1334 audit(1707769571.763:92): prog-id=14 op=LOAD Feb 12 20:26:12.176482 kernel: audit: type=1334 audit(1707769571.763:93): prog-id=4 op=UNLOAD Feb 12 20:26:12.176509 kernel: audit: type=1334 audit(1707769571.763:94): prog-id=5 op=UNLOAD Feb 12 20:26:12.176538 kernel: audit: type=1334 audit(1707769571.766:95): prog-id=15 op=LOAD Feb 12 20:26:12.176567 kernel: audit: type=1334 audit(1707769571.766:96): prog-id=12 op=UNLOAD Feb 12 20:26:12.176596 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:26:12.176625 kernel: audit: type=1334 audit(1707769571.768:97): prog-id=16 op=LOAD Feb 12 20:26:12.176659 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:26:12.176690 kernel: audit: type=1334 audit(1707769571.771:98): prog-id=17 op=LOAD Feb 12 20:26:12.176719 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:26:12.176748 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:26:12.176780 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:26:12.176817 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:26:12.176850 systemd[1]: Created slice system-getty.slice. Feb 12 20:26:12.176903 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:26:12.176938 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:26:12.176992 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:26:12.177027 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:26:12.177057 systemd[1]: Created slice user.slice. 
Feb 12 20:26:12.177090 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:26:12.177122 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:26:12.177153 systemd[1]: Set up automount boot.automount. Feb 12 20:26:12.177182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:26:12.177215 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:26:12.177249 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:26:12.177280 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:26:12.177311 systemd[1]: Reached target integritysetup.target. Feb 12 20:26:12.177340 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:26:12.177372 systemd[1]: Reached target remote-fs.target. Feb 12 20:26:12.179961 systemd[1]: Reached target slices.target. Feb 12 20:26:12.180003 systemd[1]: Reached target swap.target. Feb 12 20:26:12.180037 systemd[1]: Reached target torcx.target. Feb 12 20:26:12.180070 systemd[1]: Reached target veritysetup.target. Feb 12 20:26:12.180111 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:26:12.180143 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:26:12.180175 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:26:12.180206 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:26:12.180237 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:26:12.180266 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:26:12.180298 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:26:12.180327 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:26:12.180356 systemd[1]: Mounting media.mount... Feb 12 20:26:12.180389 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:26:12.180429 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:26:12.180460 systemd[1]: Mounting tmp.mount... Feb 12 20:26:12.180489 systemd[1]: Starting flatcar-tmpfiles.service... 
Feb 12 20:26:12.180518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:26:12.180547 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:26:12.180576 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:26:12.180605 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:26:12.180636 systemd[1]: Starting modprobe@drm.service... Feb 12 20:26:12.180669 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:26:12.180698 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:26:12.180727 systemd[1]: Starting modprobe@loop.service... Feb 12 20:26:12.180760 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:26:12.180792 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:26:12.180821 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:26:12.180853 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:26:12.180933 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:26:12.180983 kernel: fuse: init (API version 7.34) Feb 12 20:26:12.181023 systemd[1]: Stopped systemd-journald.service. Feb 12 20:26:12.181055 systemd[1]: Starting systemd-journald.service... Feb 12 20:26:12.181083 kernel: loop: module loaded Feb 12 20:26:12.181114 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:26:12.181146 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:26:12.181176 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:26:12.181207 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:26:12.181240 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:26:12.181270 systemd[1]: Stopped verity-setup.service. Feb 12 20:26:12.181299 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:26:12.181334 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:26:12.181364 systemd[1]: Mounted media.mount. 
Feb 12 20:26:12.181393 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:26:12.181422 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:26:12.181450 systemd[1]: Mounted tmp.mount. Feb 12 20:26:12.181482 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:26:12.181512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:26:12.181541 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:26:12.181571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:26:12.181604 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:26:12.181634 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:26:12.181665 systemd-journald[1490]: Journal started Feb 12 20:26:12.181764 systemd-journald[1490]: Runtime Journal (/run/log/journal/ec230efa47ce6b7484128edd8225c347) is 8.0M, max 75.4M, 67.4M free. Feb 12 20:26:07.084000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:26:07.275000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:26:07.275000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:26:07.275000 audit: BPF prog-id=10 op=LOAD Feb 12 20:26:07.275000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:26:07.275000 audit: BPF prog-id=11 op=LOAD Feb 12 20:26:07.275000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:26:07.529000 audit[1412]: AVC avc: denied { associate } for pid=1412 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:26:07.529000 audit[1412]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 
a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1395 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:07.529000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:26:07.534000 audit[1412]: AVC avc: denied { associate } for pid=1412 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:26:07.534000 audit[1412]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145985 a2=1ed a3=0 items=2 ppid=1395 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:07.534000 audit: CWD cwd="/" Feb 12 20:26:07.534000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:07.534000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:07.534000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 
20:26:11.758000 audit: BPF prog-id=12 op=LOAD Feb 12 20:26:11.761000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:26:12.186436 systemd[1]: Finished modprobe@drm.service. Feb 12 20:26:11.763000 audit: BPF prog-id=13 op=LOAD Feb 12 20:26:11.763000 audit: BPF prog-id=14 op=LOAD Feb 12 20:26:11.763000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:26:11.763000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:26:12.189782 systemd[1]: Started systemd-journald.service. Feb 12 20:26:11.766000 audit: BPF prog-id=15 op=LOAD Feb 12 20:26:11.766000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:26:11.768000 audit: BPF prog-id=16 op=LOAD Feb 12 20:26:11.771000 audit: BPF prog-id=17 op=LOAD Feb 12 20:26:11.771000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:26:11.771000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:26:11.773000 audit: BPF prog-id=18 op=LOAD Feb 12 20:26:11.773000 audit: BPF prog-id=15 op=UNLOAD Feb 12 20:26:11.776000 audit: BPF prog-id=19 op=LOAD Feb 12 20:26:11.778000 audit: BPF prog-id=20 op=LOAD Feb 12 20:26:11.778000 audit: BPF prog-id=16 op=UNLOAD Feb 12 20:26:11.778000 audit: BPF prog-id=17 op=UNLOAD Feb 12 20:26:11.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:11.792000 audit: BPF prog-id=18 op=UNLOAD Feb 12 20:26:11.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:11.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:12.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.070000 audit: BPF prog-id=21 op=LOAD Feb 12 20:26:12.071000 audit: BPF prog-id=22 op=LOAD Feb 12 20:26:12.071000 audit: BPF prog-id=23 op=LOAD Feb 12 20:26:12.071000 audit: BPF prog-id=19 op=UNLOAD Feb 12 20:26:12.071000 audit: BPF prog-id=20 op=UNLOAD Feb 12 20:26:12.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:12.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.168000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:26:12.168000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd3dd6170 a2=4000 a3=1 items=0 ppid=1 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:12.168000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:26:12.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:07.518039 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:26:12.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:11.756011 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:26:07.518788 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:26:11.780679 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 20:26:07.518844 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:26:12.195201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 12 20:26:07.518961 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:26:12.195520 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:26:07.518989 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:26:12.198409 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:26:07.519062 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:26:12.198704 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:26:12.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:07.519095 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:26:12.201916 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:26:07.519555 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:26:12.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.202230 systemd[1]: Finished modprobe@loop.service. Feb 12 20:26:07.519652 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:26:12.205119 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:26:07.519690 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:26:12.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.208035 systemd[1]: Finished systemd-network-generator.service. 
Feb 12 20:26:07.520740 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:26:12.210969 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:26:07.520829 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:26:12.214158 systemd[1]: Reached target network-pre.target. Feb 12 20:26:07.520927 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:26:12.218549 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:26:07.520989 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:26:12.223155 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 12 20:26:07.521046 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:26:07.521087 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:26:10.803797 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:26:10.804437 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:26:10.804747 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:26:12.230673 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 12 20:26:10.805377 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:26:12.233903 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:26:10.805512 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:26:10.805676 /usr/lib/systemd/system-generators/torcx-generator[1412]: time="2024-02-12T20:26:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:26:12.240220 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:26:12.242448 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:26:12.244754 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:26:12.247166 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:26:12.250725 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:26:12.262857 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:26:12.265447 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:26:12.269056 systemd-journald[1490]: Time spent on flushing to /var/log/journal/ec230efa47ce6b7484128edd8225c347 is 53.781ms for 1161 entries. Feb 12 20:26:12.269056 systemd-journald[1490]: System Journal (/var/log/journal/ec230efa47ce6b7484128edd8225c347) is 8.0M, max 195.6M, 187.6M free. 
Feb 12 20:26:12.346492 systemd-journald[1490]: Received client request to flush runtime journal. Feb 12 20:26:12.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.304444 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:26:12.307121 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:26:12.329972 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:26:12.348231 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:26:12.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.364279 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:26:12.369487 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:26:12.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.452217 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:26:12.458824 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:26:12.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.474504 udevadm[1530]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 20:26:12.539181 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:26:12.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:12.546181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:26:12.670001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:26:12.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.241000 audit: BPF prog-id=24 op=LOAD Feb 12 20:26:13.241000 audit: BPF prog-id=25 op=LOAD Feb 12 20:26:13.241000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:26:13.242000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:26:13.240215 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:26:13.245097 systemd[1]: Starting systemd-udevd.service... Feb 12 20:26:13.285688 systemd-udevd[1533]: Using default interface naming scheme 'v252'. Feb 12 20:26:13.348781 systemd[1]: Started systemd-udevd.service. Feb 12 20:26:13.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.351000 audit: BPF prog-id=26 op=LOAD Feb 12 20:26:13.354276 systemd[1]: Starting systemd-networkd.service... 
Feb 12 20:26:13.360000 audit: BPF prog-id=27 op=LOAD Feb 12 20:26:13.360000 audit: BPF prog-id=28 op=LOAD Feb 12 20:26:13.361000 audit: BPF prog-id=29 op=LOAD Feb 12 20:26:13.364155 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:26:13.446052 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:26:13.456223 systemd[1]: Started systemd-userdbd.service. Feb 12 20:26:13.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.492041 (udev-worker)[1536]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:13.643433 systemd-networkd[1539]: lo: Link UP Feb 12 20:26:13.643457 systemd-networkd[1539]: lo: Gained carrier Feb 12 20:26:13.644387 systemd-networkd[1539]: Enumeration completed Feb 12 20:26:13.644565 systemd[1]: Started systemd-networkd.service. Feb 12 20:26:13.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.650654 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:26:13.656817 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 20:26:13.663907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:13.663964 systemd-networkd[1539]: eth0: Link UP Feb 12 20:26:13.664293 systemd-networkd[1539]: eth0: Gained carrier Feb 12 20:26:13.688142 systemd-networkd[1539]: eth0: DHCPv4 address 172.31.21.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 20:26:13.726896 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1536) Feb 12 20:26:13.834301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:26:13.837648 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:26:13.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.842636 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:26:13.883348 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:26:13.924513 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:26:13.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:13.927507 systemd[1]: Reached target cryptsetup.target. Feb 12 20:26:13.932491 systemd[1]: Starting lvm2-activation.service... Feb 12 20:26:13.942203 lvm[1653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:26:13.978617 systemd[1]: Finished lvm2-activation.service. Feb 12 20:26:13.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:13.981081 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:26:13.983251 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:26:13.983311 systemd[1]: Reached target local-fs.target. Feb 12 20:26:13.985311 systemd[1]: Reached target machines.target. Feb 12 20:26:13.989562 systemd[1]: Starting ldconfig.service... Feb 12 20:26:13.991734 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:26:13.991914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:26:13.994229 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:26:13.998544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:26:14.004537 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:26:14.007212 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:26:14.007342 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:26:14.009624 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:26:14.029181 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1655 (bootctl) Feb 12 20:26:14.031481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:26:14.054104 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:26:14.056644 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:26:14.059903 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 12 20:26:14.078326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:26:14.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.192392 systemd-fsck[1664]: fsck.fat 4.2 (2021-01-31) Feb 12 20:26:14.192392 systemd-fsck[1664]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 12 20:26:14.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.193718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:26:14.199698 systemd[1]: Mounting boot.mount... Feb 12 20:26:14.222399 systemd[1]: Mounted boot.mount. Feb 12 20:26:14.252039 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:26:14.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.424511 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:26:14.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.430576 systemd[1]: Starting audit-rules.service... Feb 12 20:26:14.435489 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:26:14.447191 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:26:14.450000 audit: BPF prog-id=30 op=LOAD Feb 12 20:26:14.454452 systemd[1]: Starting systemd-resolved.service... 
Feb 12 20:26:14.457000 audit: BPF prog-id=31 op=LOAD Feb 12 20:26:14.461028 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:26:14.467165 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:26:14.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.472073 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:26:14.476985 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:26:14.507000 audit[1684]: SYSTEM_BOOT pid=1684 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.512555 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:26:14.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.552315 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:26:14.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:14.633060 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:26:14.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:14.638000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:26:14.638000 audit[1698]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff80cb3f0 a2=420 a3=0 items=0 ppid=1678 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:14.638000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:26:14.641367 augenrules[1698]: No rules Feb 12 20:26:14.636595 systemd[1]: Reached target time-set.target. Feb 12 20:26:14.642093 systemd[1]: Finished audit-rules.service. Feb 12 20:26:14.667010 systemd-resolved[1682]: Positive Trust Anchors: Feb 12 20:26:14.667101 systemd-resolved[1682]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:26:14.667155 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:26:14.716173 systemd-resolved[1682]: Defaulting to hostname 'linux'. Feb 12 20:26:14.718724 systemd-timesyncd[1683]: Contacted time server 216.229.4.69:123 (0.flatcar.pool.ntp.org). Feb 12 20:26:14.719483 systemd[1]: Started systemd-resolved.service. Feb 12 20:26:14.719830 systemd-timesyncd[1683]: Initial clock synchronization to Mon 2024-02-12 20:26:14.878867 UTC. Feb 12 20:26:14.722007 systemd[1]: Reached target network.target. 
Feb 12 20:26:14.724050 systemd[1]: Reached target nss-lookup.target. Feb 12 20:26:15.015105 systemd-networkd[1539]: eth0: Gained IPv6LL Feb 12 20:26:15.020456 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:26:15.023346 systemd[1]: Reached target network-online.target. Feb 12 20:26:15.363760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:26:15.365029 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:26:15.680897 ldconfig[1654]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:26:15.696361 systemd[1]: Finished ldconfig.service. Feb 12 20:26:15.701017 systemd[1]: Starting systemd-update-done.service... Feb 12 20:26:15.714718 systemd[1]: Finished systemd-update-done.service. Feb 12 20:26:15.717204 systemd[1]: Reached target sysinit.target. Feb 12 20:26:15.719419 systemd[1]: Started motdgen.path. Feb 12 20:26:15.721227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:26:15.724352 systemd[1]: Started logrotate.timer. Feb 12 20:26:15.726425 systemd[1]: Started mdadm.timer. Feb 12 20:26:15.728074 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:26:15.730280 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:26:15.730342 systemd[1]: Reached target paths.target. Feb 12 20:26:15.732224 systemd[1]: Reached target timers.target. Feb 12 20:26:15.734657 systemd[1]: Listening on dbus.socket. Feb 12 20:26:15.738252 systemd[1]: Starting docker.socket... Feb 12 20:26:15.753480 systemd[1]: Listening on sshd.socket. Feb 12 20:26:15.755747 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:26:15.756792 systemd[1]: Listening on docker.socket. 
Feb 12 20:26:15.759263 systemd[1]: Reached target sockets.target. Feb 12 20:26:15.761479 systemd[1]: Reached target basic.target. Feb 12 20:26:15.763682 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:26:15.763901 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:26:15.776495 systemd[1]: Started amazon-ssm-agent.service. Feb 12 20:26:15.782707 systemd[1]: Starting containerd.service... Feb 12 20:26:15.787468 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:26:15.792270 systemd[1]: Starting dbus.service... Feb 12 20:26:15.796277 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:26:15.801641 systemd[1]: Starting extend-filesystems.service... Feb 12 20:26:15.803675 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:26:15.806203 systemd[1]: Starting motdgen.service... Feb 12 20:26:15.811768 systemd[1]: Started nvidia.service. Feb 12 20:26:15.821868 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:26:15.829533 systemd[1]: Starting prepare-critools.service... Feb 12 20:26:15.834584 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:26:15.844307 systemd[1]: Starting sshd-keygen.service... Feb 12 20:26:15.859285 systemd[1]: Starting systemd-logind.service... Feb 12 20:26:15.861570 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:26:15.861729 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:26:15.926970 jq[1728]: true Feb 12 20:26:15.862681 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Feb 12 20:26:15.980149 jq[1718]: false Feb 12 20:26:15.866251 systemd[1]: Starting update-engine.service... Feb 12 20:26:15.871134 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:26:16.005169 tar[1735]: ./ Feb 12 20:26:16.005169 tar[1735]: ./macvlan Feb 12 20:26:15.949747 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:26:15.950225 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:26:15.996925 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:26:15.997274 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:26:16.049375 tar[1732]: crictl Feb 12 20:26:16.067658 dbus-daemon[1717]: [system] SELinux support is enabled Feb 12 20:26:16.067989 systemd[1]: Started dbus.service. Feb 12 20:26:16.071453 dbus-daemon[1717]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1539 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 20:26:16.074979 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:26:16.076894 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 20:26:16.075027 systemd[1]: Reached target system-config.target. Feb 12 20:26:16.079730 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:26:16.079767 systemd[1]: Reached target user-config.target. Feb 12 20:26:16.094557 systemd[1]: Starting systemd-hostnamed.service... Feb 12 20:26:16.110639 jq[1738]: true Feb 12 20:26:16.125348 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 12 20:26:16.125702 systemd[1]: Finished motdgen.service. Feb 12 20:26:16.179422 extend-filesystems[1719]: Found nvme0n1 Feb 12 20:26:16.186181 extend-filesystems[1719]: Found nvme0n1p1 Feb 12 20:26:16.191600 extend-filesystems[1719]: Found nvme0n1p2 Feb 12 20:26:16.195416 update_engine[1727]: I0212 20:26:16.193074 1727 main.cc:92] Flatcar Update Engine starting Feb 12 20:26:16.196107 extend-filesystems[1719]: Found nvme0n1p3 Feb 12 20:26:16.203608 extend-filesystems[1719]: Found usr Feb 12 20:26:16.212533 extend-filesystems[1719]: Found nvme0n1p4 Feb 12 20:26:16.212533 extend-filesystems[1719]: Found nvme0n1p6 Feb 12 20:26:16.212533 extend-filesystems[1719]: Found nvme0n1p7 Feb 12 20:26:16.212533 extend-filesystems[1719]: Found nvme0n1p9 Feb 12 20:26:16.212533 extend-filesystems[1719]: Checking size of /dev/nvme0n1p9 Feb 12 20:26:16.208273 systemd[1]: Started update-engine.service. Feb 12 20:26:16.240663 update_engine[1727]: I0212 20:26:16.208346 1727 update_check_scheduler.cc:74] Next update check in 5m54s Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: 2024/02/12 20:26:16 Failed to load instance info from vault. RegistrationKey does not exist. Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: Initializing new seelog logger Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: New Seelog Logger Creation Complete Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: 2024/02/12 20:26:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:26:16.240822 amazon-ssm-agent[1714]: 2024/02/12 20:26:16 processing appconfig overrides Feb 12 20:26:16.222729 systemd[1]: Started locksmithd.service. 
Feb 12 20:26:16.326719 extend-filesystems[1719]: Resized partition /dev/nvme0n1p9 Feb 12 20:26:16.336445 extend-filesystems[1788]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:26:16.413726 env[1736]: time="2024-02-12T20:26:16.412138590Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:26:16.423039 bash[1789]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:26:16.424947 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:26:16.432931 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 12 20:26:16.483175 systemd-logind[1726]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 20:26:16.486946 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 12 20:26:16.506117 systemd-logind[1726]: New seat seat0. Feb 12 20:26:16.521608 systemd[1]: Started systemd-logind.service. Feb 12 20:26:16.529796 tar[1735]: ./static Feb 12 20:26:16.532336 extend-filesystems[1788]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 12 20:26:16.532336 extend-filesystems[1788]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:26:16.532336 extend-filesystems[1788]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 12 20:26:16.557231 extend-filesystems[1719]: Resized filesystem in /dev/nvme0n1p9 Feb 12 20:26:16.551187 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:26:16.551562 systemd[1]: Finished extend-filesystems.service. Feb 12 20:26:16.592659 env[1736]: time="2024-02-12T20:26:16.592504819Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:26:16.593152 env[1736]: time="2024-02-12T20:26:16.593100516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:26:16.596523 env[1736]: time="2024-02-12T20:26:16.595708888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:26:16.596523 env[1736]: time="2024-02-12T20:26:16.595802872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:26:16.597106 env[1736]: time="2024-02-12T20:26:16.597023301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:26:16.597106 env[1736]: time="2024-02-12T20:26:16.597092082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:26:16.597287 env[1736]: time="2024-02-12T20:26:16.597130471Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:26:16.597287 env[1736]: time="2024-02-12T20:26:16.597157102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:26:16.597421 env[1736]: time="2024-02-12T20:26:16.597372445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:26:16.598040 env[1736]: time="2024-02-12T20:26:16.597967787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:26:16.630019 env[1736]: time="2024-02-12T20:26:16.629930367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:26:16.630019 env[1736]: time="2024-02-12T20:26:16.630008562Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:26:16.630211 env[1736]: time="2024-02-12T20:26:16.630177993Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:26:16.630312 env[1736]: time="2024-02-12T20:26:16.630206895Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663058380Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663133205Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663166368Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663247677Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663386277Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663423421Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663454924Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.663964708Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664021304Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664057520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664088229Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664124982Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664358518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:26:16.664617 env[1736]: time="2024-02-12T20:26:16.664517998Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:26:16.666250 env[1736]: time="2024-02-12T20:26:16.666207123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:26:16.666424 env[1736]: time="2024-02-12T20:26:16.666393405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.666686 env[1736]: time="2024-02-12T20:26:16.666654610Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:26:16.666949 env[1736]: time="2024-02-12T20:26:16.666918171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 20:26:16.667287 env[1736]: time="2024-02-12T20:26:16.667242760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.667543 env[1736]: time="2024-02-12T20:26:16.667512573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.667663 env[1736]: time="2024-02-12T20:26:16.667634140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.667961 env[1736]: time="2024-02-12T20:26:16.667931097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.668252 env[1736]: time="2024-02-12T20:26:16.668219995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.668410 env[1736]: time="2024-02-12T20:26:16.668380427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.668644 env[1736]: time="2024-02-12T20:26:16.668613817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.668915 env[1736]: time="2024-02-12T20:26:16.668869820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:26:16.669580 env[1736]: time="2024-02-12T20:26:16.669546703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.669733 env[1736]: time="2024-02-12T20:26:16.669704058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.670004 env[1736]: time="2024-02-12T20:26:16.669971320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 12 20:26:16.670153 env[1736]: time="2024-02-12T20:26:16.670120824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:26:16.670283 env[1736]: time="2024-02-12T20:26:16.670249509Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:26:16.670396 env[1736]: time="2024-02-12T20:26:16.670367413Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:26:16.673011 env[1736]: time="2024-02-12T20:26:16.672951828Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:26:16.673267 env[1736]: time="2024-02-12T20:26:16.673235500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:26:16.673765 env[1736]: time="2024-02-12T20:26:16.673661875Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:26:16.674823 env[1736]: time="2024-02-12T20:26:16.674184456Z" level=info msg="Connect containerd service" Feb 12 20:26:16.674823 env[1736]: time="2024-02-12T20:26:16.674260783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:26:16.675890 env[1736]: time="2024-02-12T20:26:16.675822407Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:26:16.676337 env[1736]: time="2024-02-12T20:26:16.676264521Z" level=info msg="Start subscribing containerd event" Feb 12 20:26:16.676428 env[1736]: time="2024-02-12T20:26:16.676352607Z" level=info msg="Start recovering state" Feb 12 20:26:16.676492 env[1736]: 
time="2024-02-12T20:26:16.676467531Z" level=info msg="Start event monitor" Feb 12 20:26:16.676551 env[1736]: time="2024-02-12T20:26:16.676505664Z" level=info msg="Start snapshots syncer" Feb 12 20:26:16.676551 env[1736]: time="2024-02-12T20:26:16.676530244Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:26:16.676666 env[1736]: time="2024-02-12T20:26:16.676550659Z" level=info msg="Start streaming server" Feb 12 20:26:16.678758 env[1736]: time="2024-02-12T20:26:16.678701655Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:26:16.679118 env[1736]: time="2024-02-12T20:26:16.679083328Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:26:16.679442 systemd[1]: Started containerd.service. Feb 12 20:26:16.685960 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 20:26:16.686963 env[1736]: time="2024-02-12T20:26:16.686917908Z" level=info msg="containerd successfully booted in 0.343282s" Feb 12 20:26:16.827587 tar[1735]: ./vlan Feb 12 20:26:16.876043 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 20:26:16.876387 systemd[1]: Started systemd-hostnamed.service. Feb 12 20:26:16.876968 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1758 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 20:26:16.886023 systemd[1]: Starting polkit.service... 
Feb 12 20:26:16.933212 polkitd[1878]: Started polkitd version 121 Feb 12 20:26:16.974324 polkitd[1878]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 20:26:16.974459 polkitd[1878]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 20:26:16.988949 polkitd[1878]: Finished loading, compiling and executing 2 rules Feb 12 20:26:16.989731 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 20:26:16.990017 systemd[1]: Started polkit.service. Feb 12 20:26:16.990568 polkitd[1878]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 20:26:17.052585 systemd-hostnamed[1758]: Hostname set to (transient) Feb 12 20:26:17.052747 systemd-resolved[1682]: System hostname changed to 'ip-172-31-21-6'. Feb 12 20:26:17.054064 tar[1735]: ./portmap Feb 12 20:26:17.220488 coreos-metadata[1716]: Feb 12 20:26:17.220 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:26:17.222032 coreos-metadata[1716]: Feb 12 20:26:17.221 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 12 20:26:17.222812 coreos-metadata[1716]: Feb 12 20:26:17.222 INFO Fetch successful Feb 12 20:26:17.223740 coreos-metadata[1716]: Feb 12 20:26:17.223 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:26:17.224484 coreos-metadata[1716]: Feb 12 20:26:17.223 INFO Fetch successful Feb 12 20:26:17.227211 unknown[1716]: wrote ssh authorized keys file for user: core Feb 12 20:26:17.263637 tar[1735]: ./host-local Feb 12 20:26:17.263801 update-ssh-keys[1901]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:26:17.264948 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 12 20:26:17.274467 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Create new startup processor Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [LongRunningPluginsManager] registered plugins: {} Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing bookkeeping folders Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO removing the completed state files Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing bookkeeping folders for long running plugins Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing healthcheck folders for long running plugins Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing locations for inventory plugin Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing default location for custom inventory Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing default location for file inventory Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Initializing default location for role inventory Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Init the cloudwatchlogs publisher Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:softwareInventory Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 12 20:26:17.276108 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:runDockerAction Feb 12 20:26:17.277943 
amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:downloadContent Feb 12 20:26:17.279862 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:runDocument Feb 12 20:26:17.279862 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 12 20:26:17.280105 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:configureDocker Feb 12 20:26:17.280105 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:refreshAssociation Feb 12 20:26:17.280105 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform independent plugin aws:configurePackage Feb 12 20:26:17.280105 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Successfully loaded platform dependent plugin aws:runShellScript Feb 12 20:26:17.280105 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 12 20:26:17.280401 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO OS: linux, Arch: arm64 Feb 12 20:26:17.287642 amazon-ssm-agent[1714]: datastore file /var/lib/amazon/ssm/i-01abe41162b290736/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 12 20:26:17.371952 tar[1735]: ./vrf Feb 12 20:26:17.372861 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] Starting document processing engine... 
Feb 12 20:26:17.440533 tar[1735]: ./bridge Feb 12 20:26:17.467626 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 12 20:26:17.522383 tar[1735]: ./tuning Feb 12 20:26:17.561986 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 12 20:26:17.583343 tar[1735]: ./firewall Feb 12 20:26:17.656544 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] Starting message polling Feb 12 20:26:17.662377 tar[1735]: ./host-device Feb 12 20:26:17.751276 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 12 20:26:17.762130 tar[1735]: ./sbr Feb 12 20:26:17.846284 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [instanceID=i-01abe41162b290736] Starting association polling Feb 12 20:26:17.873935 tar[1735]: ./loopback Feb 12 20:26:17.941348 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 12 20:26:17.966709 tar[1735]: ./dhcp Feb 12 20:26:18.036693 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 12 20:26:18.070615 systemd[1]: Finished prepare-critools.service. Feb 12 20:26:18.132295 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 12 20:26:18.172712 tar[1735]: ./ptp Feb 12 20:26:18.227998 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 12 20:26:18.236378 tar[1735]: ./ipvlan Feb 12 20:26:18.298193 tar[1735]: ./bandwidth Feb 12 20:26:18.323918 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 12 20:26:18.388651 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 12 20:26:18.413360 locksmithd[1767]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:26:18.420123 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] Starting session document processing engine... Feb 12 20:26:18.516564 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 12 20:26:18.613167 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 12 20:26:18.710267 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] listening reply. Feb 12 20:26:18.807102 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [OfflineService] Starting document processing engine... Feb 12 20:26:18.904206 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [OfflineService] [EngineProcessor] Starting Feb 12 20:26:19.001747 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [OfflineService] [EngineProcessor] Initial processing Feb 12 20:26:19.099283 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [OfflineService] Starting message polling Feb 12 20:26:19.197004 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [OfflineService] Starting send replies to MDS Feb 12 20:26:19.209592 sshd_keygen[1753]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:26:19.245808 systemd[1]: Finished sshd-keygen.service. Feb 12 20:26:19.250861 systemd[1]: Starting issuegen.service... Feb 12 20:26:19.261369 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:26:19.261719 systemd[1]: Finished issuegen.service. Feb 12 20:26:19.267006 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:26:19.280111 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:26:19.285428 systemd[1]: Started getty@tty1.service. Feb 12 20:26:19.290718 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:26:19.293497 systemd[1]: Reached target getty.target. 
Feb 12 20:26:19.295758 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 12 20:26:19.295920 systemd[1]: Reached target multi-user.target. Feb 12 20:26:19.300729 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:26:19.317148 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:26:19.317530 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:26:19.322692 systemd[1]: Startup finished in 1.153s (kernel) + 11.363s (initrd) + 12.365s (userspace) = 24.882s. Feb 12 20:26:19.393847 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 12 20:26:19.492517 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-01abe41162b290736, requestId: f9c7fa2a-2b46-4d0f-8e93-fe3d1aaee3c6 Feb 12 20:26:19.591098 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 12 20:26:19.689984 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 12 20:26:19.789229 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [StartupProcessor] Executing startup processor tasks Feb 12 20:26:19.888471 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 12 20:26:19.987644 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 12 20:26:20.087733 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 12 20:26:20.187620 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01abe41162b290736?role=subscribe&stream=input Feb 12 20:26:20.287445 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01abe41162b290736?role=subscribe&stream=input Feb 12 20:26:20.387846 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] Starting receiving message from control channel Feb 12 20:26:20.488241 amazon-ssm-agent[1714]: 2024-02-12 20:26:17 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 12 20:26:24.172790 systemd[1]: Created slice system-sshd.slice. Feb 12 20:26:24.175971 systemd[1]: Started sshd@0-172.31.21.6:22-147.75.109.163:60998.service. 
Feb 12 20:26:24.387100 sshd[1929]: Accepted publickey for core from 147.75.109.163 port 60998 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:24.391337 sshd[1929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:24.408666 systemd[1]: Created slice user-500.slice. Feb 12 20:26:24.412083 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:26:24.420025 systemd-logind[1726]: New session 1 of user core. Feb 12 20:26:24.430968 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:26:24.433975 systemd[1]: Starting user@500.service... Feb 12 20:26:24.440486 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:24.631981 systemd[1932]: Queued start job for default target default.target. Feb 12 20:26:24.634248 systemd[1932]: Reached target paths.target. Feb 12 20:26:24.634499 systemd[1932]: Reached target sockets.target. Feb 12 20:26:24.634658 systemd[1932]: Reached target timers.target. Feb 12 20:26:24.634805 systemd[1932]: Reached target basic.target. Feb 12 20:26:24.635114 systemd[1]: Started user@500.service. Feb 12 20:26:24.636979 systemd[1]: Started session-1.scope. Feb 12 20:26:24.637289 systemd[1932]: Reached target default.target. Feb 12 20:26:24.637517 systemd[1932]: Startup finished in 185ms. Feb 12 20:26:24.783091 systemd[1]: Started sshd@1-172.31.21.6:22-147.75.109.163:49274.service. Feb 12 20:26:24.960119 sshd[1941]: Accepted publickey for core from 147.75.109.163 port 49274 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:24.962672 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:24.969927 systemd-logind[1726]: New session 2 of user core. Feb 12 20:26:24.971770 systemd[1]: Started session-2.scope. 
Feb 12 20:26:25.105435 sshd[1941]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:25.111741 systemd[1]: sshd@1-172.31.21.6:22-147.75.109.163:49274.service: Deactivated successfully. Feb 12 20:26:25.113036 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:26:25.113680 systemd-logind[1726]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:26:25.115681 systemd-logind[1726]: Removed session 2. Feb 12 20:26:25.133946 systemd[1]: Started sshd@2-172.31.21.6:22-147.75.109.163:49280.service. Feb 12 20:26:25.308469 sshd[1947]: Accepted publickey for core from 147.75.109.163 port 49280 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:25.311628 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:25.320186 systemd[1]: Started session-3.scope. Feb 12 20:26:25.321364 systemd-logind[1726]: New session 3 of user core. Feb 12 20:26:25.445067 sshd[1947]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:25.451140 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:26:25.452784 systemd[1]: sshd@2-172.31.21.6:22-147.75.109.163:49280.service: Deactivated successfully. Feb 12 20:26:25.453120 systemd-logind[1726]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:26:25.455546 systemd-logind[1726]: Removed session 3. Feb 12 20:26:25.474327 systemd[1]: Started sshd@3-172.31.21.6:22-147.75.109.163:49282.service. Feb 12 20:26:25.653700 sshd[1953]: Accepted publickey for core from 147.75.109.163 port 49282 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:25.656621 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:25.665039 systemd[1]: Started session-4.scope. Feb 12 20:26:25.666986 systemd-logind[1726]: New session 4 of user core. 
Feb 12 20:26:25.800184 sshd[1953]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:25.804996 systemd-logind[1726]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:26:25.805190 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:26:25.806906 systemd[1]: sshd@3-172.31.21.6:22-147.75.109.163:49282.service: Deactivated successfully. Feb 12 20:26:25.808745 systemd-logind[1726]: Removed session 4. Feb 12 20:26:25.828468 systemd[1]: Started sshd@4-172.31.21.6:22-147.75.109.163:49296.service. Feb 12 20:26:26.001741 sshd[1959]: Accepted publickey for core from 147.75.109.163 port 49296 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:26.004711 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:26.013687 systemd[1]: Started session-5.scope. Feb 12 20:26:26.014608 systemd-logind[1726]: New session 5 of user core. Feb 12 20:26:26.153276 sudo[1962]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:26:26.154382 sudo[1962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:26:26.840938 systemd[1]: Reloading. Feb 12 20:26:26.968026 /usr/lib/systemd/system-generators/torcx-generator[1991]: time="2024-02-12T20:26:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:26:26.968101 /usr/lib/systemd/system-generators/torcx-generator[1991]: time="2024-02-12T20:26:26Z" level=info msg="torcx already run" Feb 12 20:26:27.142425 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:26:27.142466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:26:27.186971 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:26:27.377713 systemd[1]: Started kubelet.service. Feb 12 20:26:27.401157 systemd[1]: Starting coreos-metadata.service... Feb 12 20:26:27.508455 kubelet[2046]: E0212 20:26:27.508353 2046 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:26:27.513105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:26:27.513456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:26:27.581909 coreos-metadata[2054]: Feb 12 20:26:27.581 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:26:27.582770 coreos-metadata[2054]: Feb 12 20:26:27.582 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 12 20:26:27.583585 coreos-metadata[2054]: Feb 12 20:26:27.583 INFO Fetch successful Feb 12 20:26:27.583702 coreos-metadata[2054]: Feb 12 20:26:27.583 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 12 20:26:27.584470 coreos-metadata[2054]: Feb 12 20:26:27.584 INFO Fetch successful Feb 12 20:26:27.584613 coreos-metadata[2054]: Feb 12 20:26:27.584 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 12 20:26:27.585316 coreos-metadata[2054]: Feb 12 20:26:27.585 INFO Fetch successful Feb 12 20:26:27.585432 coreos-metadata[2054]: Feb 12 20:26:27.585 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 12 20:26:27.586131 
coreos-metadata[2054]: Feb 12 20:26:27.586 INFO Fetch successful Feb 12 20:26:27.586276 coreos-metadata[2054]: Feb 12 20:26:27.586 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 12 20:26:27.586946 coreos-metadata[2054]: Feb 12 20:26:27.586 INFO Fetch successful Feb 12 20:26:27.587055 coreos-metadata[2054]: Feb 12 20:26:27.587 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 12 20:26:27.587700 coreos-metadata[2054]: Feb 12 20:26:27.587 INFO Fetch successful Feb 12 20:26:27.587807 coreos-metadata[2054]: Feb 12 20:26:27.587 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 12 20:26:27.588475 coreos-metadata[2054]: Feb 12 20:26:27.588 INFO Fetch successful Feb 12 20:26:27.588587 coreos-metadata[2054]: Feb 12 20:26:27.588 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 12 20:26:27.589340 coreos-metadata[2054]: Feb 12 20:26:27.589 INFO Fetch successful Feb 12 20:26:27.604745 systemd[1]: Finished coreos-metadata.service. Feb 12 20:26:29.548054 systemd[1]: Stopped kubelet.service. Feb 12 20:26:29.579176 systemd[1]: Reloading. Feb 12 20:26:29.748420 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2024-02-12T20:26:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:26:29.748489 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2024-02-12T20:26:29Z" level=info msg="torcx already run" Feb 12 20:26:29.924389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:26:29.924437 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:26:29.971808 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:26:30.204435 systemd[1]: Started kubelet.service. Feb 12 20:26:30.322309 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:26:30.322907 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:26:30.322907 kubelet[2167]: I0212 20:26:30.322532 2167 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:26:30.325391 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:26:30.325391 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:26:32.480180 kubelet[2167]: I0212 20:26:32.480120 2167 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:26:32.480180 kubelet[2167]: I0212 20:26:32.480169 2167 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:26:32.480965 kubelet[2167]: I0212 20:26:32.480518 2167 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:26:32.484413 kubelet[2167]: I0212 20:26:32.484376 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:26:32.487127 kubelet[2167]: W0212 20:26:32.487075 2167 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 20:26:32.488778 kubelet[2167]: I0212 20:26:32.488735 2167 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:26:32.489351 kubelet[2167]: I0212 20:26:32.489310 2167 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:26:32.489459 kubelet[2167]: I0212 20:26:32.489433 2167 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} 
GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:26:32.489627 kubelet[2167]: I0212 20:26:32.489472 2167 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:26:32.489627 kubelet[2167]: I0212 20:26:32.489498 2167 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:26:32.489765 kubelet[2167]: I0212 20:26:32.489653 2167 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:26:32.496098 kubelet[2167]: I0212 20:26:32.496065 2167 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:26:32.496314 kubelet[2167]: I0212 20:26:32.496292 2167 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:26:32.496501 kubelet[2167]: I0212 20:26:32.496480 2167 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:26:32.496625 kubelet[2167]: I0212 20:26:32.496605 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:26:32.497601 kubelet[2167]: E0212 20:26:32.497551 2167 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.497760 kubelet[2167]: E0212 20:26:32.497657 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.498551 kubelet[2167]: I0212 20:26:32.498521 2167 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:26:32.499489 kubelet[2167]: W0212 20:26:32.499462 2167 probe.go:268] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:26:32.500981 kubelet[2167]: I0212 20:26:32.500947 2167 server.go:1186] "Started kubelet" Feb 12 20:26:32.501370 kubelet[2167]: I0212 20:26:32.501330 2167 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:26:32.502598 kubelet[2167]: I0212 20:26:32.502547 2167 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:26:32.505042 kubelet[2167]: E0212 20:26:32.504987 2167 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:26:32.505042 kubelet[2167]: E0212 20:26:32.505043 2167 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:26:32.509095 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 20:26:32.509470 kubelet[2167]: I0212 20:26:32.509441 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:26:32.517885 kubelet[2167]: E0212 20:26:32.517810 2167 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.6\" not found" Feb 12 20:26:32.518060 kubelet[2167]: I0212 20:26:32.517924 2167 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:26:32.518060 kubelet[2167]: I0212 20:26:32.518024 2167 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:26:32.539300 kubelet[2167]: W0212 20:26:32.539241 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:26:32.539552 kubelet[2167]: E0212 20:26:32.539528 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:26:32.540648 kubelet[2167]: E0212 20:26:32.540482 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b337656f3af16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", 
Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 500908397, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 500908397, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:32.541390 kubelet[2167]: W0212 20:26:32.541357 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:26:32.541583 kubelet[2167]: E0212 20:26:32.541552 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:26:32.541778 kubelet[2167]: E0212 20:26:32.541745 2167 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:26:32.542931 kubelet[2167]: W0212 20:26:32.542831 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:26:32.543200 kubelet[2167]: E0212 20:26:32.543168 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Node: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:26:32.562910 kubelet[2167]: E0212 20:26:32.562725 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b337656f79c688", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 505026184, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 505026184, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.577916 kubelet[2167]: E0212 20:26:32.577760 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.579164 kubelet[2167]: I0212 20:26:32.579108 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:26:32.579384 kubelet[2167]: I0212 20:26:32.579361 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:26:32.579601 kubelet[2167]: I0212 20:26:32.579549 2167 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:26:32.581129 kubelet[2167]: E0212 20:26:32.580532 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.582974 kubelet[2167]: E0212 20:26:32.582810 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:32.592195 kubelet[2167]: I0212 20:26:32.592155 2167 policy_none.go:49] "None policy: Start" Feb 12 20:26:32.593802 kubelet[2167]: I0212 20:26:32.593770 2167 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:26:32.594054 kubelet[2167]: I0212 20:26:32.594031 2167 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:26:32.604075 systemd[1]: Created slice kubepods.slice. Feb 12 20:26:32.613602 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 12 20:26:32.623161 kubelet[2167]: E0212 20:26:32.622189 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 619748505, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:32.622750 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 12 20:26:32.623543 kubelet[2167]: I0212 20:26:32.619856 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6" Feb 12 20:26:32.624690 kubelet[2167]: E0212 20:26:32.624575 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 619757235, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.625881 kubelet[2167]: E0212 20:26:32.625817 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6" Feb 12 20:26:32.626984 kubelet[2167]: E0212 20:26:32.626824 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 619762466, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.632631 kubelet[2167]: I0212 20:26:32.632595 2167 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:26:32.635166 kubelet[2167]: I0212 20:26:32.634131 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:26:32.636855 kubelet[2167]: E0212 20:26:32.636818 2167 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.6\" not found" Feb 12 20:26:32.639726 kubelet[2167]: E0212 20:26:32.639581 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b33765776242c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 637702855, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 637702855, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.743946 kubelet[2167]: E0212 20:26:32.743894 2167 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:26:32.826783 kubelet[2167]: I0212 20:26:32.826737 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6" Feb 12 20:26:32.828575 kubelet[2167]: E0212 20:26:32.828531 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6" Feb 12 20:26:32.829043 kubelet[2167]: E0212 20:26:32.828929 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 826686673, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:32.830727 kubelet[2167]: E0212 20:26:32.830627 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 826694081, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:32.903653 kubelet[2167]: E0212 20:26:32.903522 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 826698782, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:33.070665 kubelet[2167]: I0212 20:26:33.070550 2167 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:26:33.145825 kubelet[2167]: E0212 20:26:33.145767 2167 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:26:33.191713 kubelet[2167]: I0212 20:26:33.191660 2167 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:26:33.191713 kubelet[2167]: I0212 20:26:33.191701 2167 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:26:33.191984 kubelet[2167]: I0212 20:26:33.191740 2167 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:26:33.191984 kubelet[2167]: E0212 20:26:33.191831 2167 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:26:33.194209 kubelet[2167]: W0212 20:26:33.194154 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:26:33.194209 kubelet[2167]: E0212 20:26:33.194206 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:26:33.230143 kubelet[2167]: I0212 20:26:33.230088 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6" Feb 12 20:26:33.231949 kubelet[2167]: E0212 20:26:33.231905 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6" Feb 12 
20:26:33.232258 kubelet[2167]: E0212 20:26:33.232154 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 33, 230040801, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:33.304187 kubelet[2167]: E0212 20:26:33.304031 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 33, 230048326, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:33.497808 kubelet[2167]: E0212 20:26:33.497759 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:33.503603 kubelet[2167]: E0212 20:26:33.503467 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 33, 230053195, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:33.684696 kubelet[2167]: W0212 20:26:33.684639 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:26:33.684696 kubelet[2167]: E0212 20:26:33.684697 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:26:33.947857 kubelet[2167]: E0212 20:26:33.947712 2167 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:26:33.964378 kubelet[2167]: W0212 20:26:33.964321 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:26:33.964378 kubelet[2167]: E0212 20:26:33.964377 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:26:34.019334 kubelet[2167]: W0212 20:26:34.019279 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:26:34.019334 kubelet[2167]: E0212 20:26:34.019330 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:26:34.033621 kubelet[2167]: I0212 20:26:34.033577 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6" Feb 12 20:26:34.034763 kubelet[2167]: E0212 20:26:34.034727 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6" Feb 12 20:26:34.035346 kubelet[2167]: E0212 20:26:34.035236 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 34, 33528004, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" 
in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:34.036712 kubelet[2167]: E0212 20:26:34.036601 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 34, 33537884, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:34.103694 kubelet[2167]: E0212 20:26:34.103573 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 34, 33543064, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:34.347002 kubelet[2167]: W0212 20:26:34.346965 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:26:34.347213 kubelet[2167]: E0212 20:26:34.347191 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:26:34.498770 kubelet[2167]: E0212 20:26:34.498696 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:34.602222 amazon-ssm-agent[1714]: 2024-02-12 20:26:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 12 20:26:35.499761 kubelet[2167]: E0212 20:26:35.499664 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:35.549835 kubelet[2167]: E0212 20:26:35.549774 2167 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:26:35.636447 kubelet[2167]: I0212 20:26:35.636407 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6" Feb 12 20:26:35.638224 kubelet[2167]: E0212 20:26:35.638106 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 35, 636359898, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:26:35.638591 kubelet[2167]: E0212 20:26:35.638557 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6" Feb 12 20:26:35.639514 kubelet[2167]: E0212 20:26:35.639407 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 35, 636366891, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:35.640972 kubelet[2167]: E0212 20:26:35.640810 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 35, 636372251, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:35.735955 kubelet[2167]: W0212 20:26:35.735912 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:26:35.735955 kubelet[2167]: E0212 20:26:35.735961 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:26:36.493395 kubelet[2167]: W0212 20:26:36.493349 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:26:36.493395 kubelet[2167]: E0212 20:26:36.493401 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:26:36.500591 kubelet[2167]: E0212 20:26:36.500546 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:36.913275 kubelet[2167]: W0212 20:26:36.913147 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:26:36.913470 kubelet[2167]: E0212 20:26:36.913447 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.6" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the
cluster scope
Feb 12 20:26:37.365505 kubelet[2167]: W0212 20:26:37.365466 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:26:37.365720 kubelet[2167]: E0212 20:26:37.365698 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:26:37.501548 kubelet[2167]: E0212 20:26:37.501463 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:38.501817 kubelet[2167]: E0212 20:26:38.501741 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:38.751806 kubelet[2167]: E0212 20:26:38.751736 2167 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.21.6" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 20:26:38.841040 kubelet[2167]: I0212 20:26:38.840843 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6"
Feb 12 20:26:38.843108 kubelet[2167]: E0212 20:26:38.843057 2167 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.6"
Feb 12 20:26:38.843411 kubelet[2167]: E0212 20:26:38.843047 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b99ce0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.6 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576318688, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 38, 840743007, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b99ce0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:38.845282 kubelet[2167]: E0212 20:26:38.845128 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573b9f16d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.6 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576340333, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 38, 840787808, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573b9f16d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:38.847189 kubelet[2167]: E0212 20:26:38.847062 2167 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.6.17b3376573ba0648", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.6", UID:"172.31.21.6", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.6 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 32, 576345672, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 38, 840795147, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.6.17b3376573ba0648" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:26:39.503207 kubelet[2167]: E0212 20:26:39.503141 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:40.192219 kubelet[2167]: W0212 20:26:40.192182 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:26:40.192449 kubelet[2167]: E0212 20:26:40.192428 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:26:40.504006 kubelet[2167]: E0212 20:26:40.503939 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:41.505021 kubelet[2167]: E0212 20:26:41.504948 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:41.849221 kubelet[2167]: W0212 20:26:41.849097 2167 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:26:41.849431 kubelet[2167]: E0212 20:26:41.849407 2167 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:26:42.483525 kubelet[2167]: I0212 20:26:42.483477 2167 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 20:26:42.506000 kubelet[2167]:
E0212 20:26:42.505933 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:42.637434 kubelet[2167]: E0212 20:26:42.637372 2167 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.6\" not found"
Feb 12 20:26:42.912813 kubelet[2167]: E0212 20:26:42.911942 2167 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.6" not found
Feb 12 20:26:43.505276 kubelet[2167]: I0212 20:26:43.505216 2167 apiserver.go:52] "Watching apiserver"
Feb 12 20:26:43.506360 kubelet[2167]: E0212 20:26:43.506311 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:43.818966 kubelet[2167]: I0212 20:26:43.818770 2167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:26:43.884198 kubelet[2167]: I0212 20:26:43.884120 2167 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:26:44.506735 kubelet[2167]: E0212 20:26:44.506649 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:44.545344 kubelet[2167]: E0212 20:26:44.545287 2167 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.6" not found
Feb 12 20:26:45.160142 kubelet[2167]: E0212 20:26:45.160083 2167 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.21.6\" not found" node="172.31.21.6"
Feb 12 20:26:45.244341 kubelet[2167]: I0212 20:26:45.244308 2167 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.6"
Feb 12 20:26:45.506952 kubelet[2167]: E0212 20:26:45.506904 2167 file_linux.go:61] "Unable to read config path"
err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:45.547095 kubelet[2167]: I0212 20:26:45.547016 2167 kubelet_node_status.go:73] "Successfully registered node" node="172.31.21.6"
Feb 12 20:26:45.568438 kubelet[2167]: I0212 20:26:45.568375 2167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 20:26:45.569664 env[1736]: time="2024-02-12T20:26:45.569386951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:26:45.570308 kubelet[2167]: I0212 20:26:45.569835 2167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 12 20:26:45.581985 kubelet[2167]: I0212 20:26:45.581928 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:45.587517 kubelet[2167]: I0212 20:26:45.587457 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:45.592650 systemd[1]: Created slice kubepods-besteffort-pod760ce2b2_4056_47d7_bb8f_7a7f68d51409.slice.
Feb 12 20:26:45.621077 systemd[1]: Created slice kubepods-burstable-podd14fa5ec_0e1d_45f2_9380_48577bfb7fac.slice.
Feb 12 20:26:45.694405 kubelet[2167]: I0212 20:26:45.694362 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/760ce2b2-4056-47d7-bb8f-7a7f68d51409-kube-proxy\") pod \"kube-proxy-g7l5s\" (UID: \"760ce2b2-4056-47d7-bb8f-7a7f68d51409\") " pod="kube-system/kube-proxy-g7l5s"
Feb 12 20:26:45.694713 kubelet[2167]: I0212 20:26:45.694685 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/760ce2b2-4056-47d7-bb8f-7a7f68d51409-lib-modules\") pod \"kube-proxy-g7l5s\" (UID: \"760ce2b2-4056-47d7-bb8f-7a7f68d51409\") " pod="kube-system/kube-proxy-g7l5s"
Feb 12 20:26:45.694957 kubelet[2167]: I0212 20:26:45.694927 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hostproc\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.695145 kubelet[2167]: I0212 20:26:45.695122 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-cgroup\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.695310 kubelet[2167]: I0212 20:26:45.695287 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.695475 kubelet[2167]: I0212 20:26:45.695453 2167 reconciler_common.go:253]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-net\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.695682 kubelet[2167]: I0212 20:26:45.695657 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hubble-tls\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.695893 kubelet[2167]: I0212 20:26:45.695842 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-run\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.696104 kubelet[2167]: I0212 20:26:45.696079 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-lib-modules\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.696760 kubelet[2167]: I0212 20:26:45.696718 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-xtables-lock\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.697055 kubelet[2167]: I0212 20:26:45.697027 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlxmj\" (UniqueName:
\"kubernetes.io/projected/760ce2b2-4056-47d7-bb8f-7a7f68d51409-kube-api-access-qlxmj\") pod \"kube-proxy-g7l5s\" (UID: \"760ce2b2-4056-47d7-bb8f-7a7f68d51409\") " pod="kube-system/kube-proxy-g7l5s"
Feb 12 20:26:45.697254 kubelet[2167]: I0212 20:26:45.697228 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-bpf-maps\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.697432 kubelet[2167]: I0212 20:26:45.697408 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cni-path\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.697640 kubelet[2167]: I0212 20:26:45.697615 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-etc-cni-netd\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.697811 kubelet[2167]: I0212 20:26:45.697788 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n859h\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-kube-api-access-n859h\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.698063 kubelet[2167]: I0212 20:26:45.698037 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/760ce2b2-4056-47d7-bb8f-7a7f68d51409-xtables-lock\") pod \"kube-proxy-g7l5s\" (UID:
\"760ce2b2-4056-47d7-bb8f-7a7f68d51409\") " pod="kube-system/kube-proxy-g7l5s"
Feb 12 20:26:45.698243 kubelet[2167]: I0212 20:26:45.698219 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-clustermesh-secrets\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.698415 kubelet[2167]: I0212 20:26:45.698391 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-kernel\") pod \"cilium-bqcxl\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") " pod="kube-system/cilium-bqcxl"
Feb 12 20:26:45.732125 sudo[1962]: pam_unix(sudo:session): session closed for user root
Feb 12 20:26:45.756256 sshd[1959]: pam_unix(sshd:session): session closed for user core
Feb 12 20:26:45.760996 systemd[1]: sshd@4-172.31.21.6:22-147.75.109.163:49296.service: Deactivated successfully.
Feb 12 20:26:45.762355 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:26:45.764748 systemd-logind[1726]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:26:45.766956 systemd-logind[1726]: Removed session 5.
Feb 12 20:26:46.507842 kubelet[2167]: E0212 20:26:46.507770 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:46.742417 kubelet[2167]: I0212 20:26:46.742353 2167 request.go:690] Waited for 1.152847175s due to client-side throttling, not priority and fairness, request: GET:https://172.31.18.59:6443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dhubble-server-certs&limit=500&resourceVersion=0
Feb 12 20:26:46.801288 kubelet[2167]: E0212 20:26:46.801160 2167 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:26:46.801583 kubelet[2167]: E0212 20:26:46.801554 2167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path podName:d14fa5ec-0e1d-45f2-9380-48577bfb7fac nodeName:}" failed. No retries permitted until 2024-02-12 20:26:47.301510488 +0000 UTC m=+17.089943034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path") pod "cilium-bqcxl" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:26:47.086284 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 12 20:26:47.435056 env[1736]: time="2024-02-12T20:26:47.433048636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqcxl,Uid:d14fa5ec-0e1d-45f2-9380-48577bfb7fac,Namespace:kube-system,Attempt:0,}"
Feb 12 20:26:47.508588 kubelet[2167]: E0212 20:26:47.508521 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:47.720893 env[1736]: time="2024-02-12T20:26:47.720219965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7l5s,Uid:760ce2b2-4056-47d7-bb8f-7a7f68d51409,Namespace:kube-system,Attempt:0,}"
Feb 12 20:26:48.066909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214847574.mount: Deactivated successfully.
Feb 12 20:26:48.078199 env[1736]: time="2024-02-12T20:26:48.078131714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.080732 env[1736]: time="2024-02-12T20:26:48.080680115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.085756 env[1736]: time="2024-02-12T20:26:48.085703551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.088037 env[1736]: time="2024-02-12T20:26:48.087976977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.092848 env[1736]: time="2024-02-12T20:26:48.092797784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image:
managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.095128 env[1736]: time="2024-02-12T20:26:48.095060623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.099569 env[1736]: time="2024-02-12T20:26:48.099512600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.102046 env[1736]: time="2024-02-12T20:26:48.101997574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:26:48.197351 env[1736]: time="2024-02-12T20:26:48.197207310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:26:48.197351 env[1736]: time="2024-02-12T20:26:48.197289402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:26:48.197697 env[1736]: time="2024-02-12T20:26:48.197318642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:26:48.198165 env[1736]: time="2024-02-12T20:26:48.198076647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9 pid=2264 runtime=io.containerd.runc.v2
Feb 12 20:26:48.204549 env[1736]: time="2024-02-12T20:26:48.204409718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:48.204706 env[1736]: time="2024-02-12T20:26:48.204570324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:48.204706 env[1736]: time="2024-02-12T20:26:48.204634074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:48.205087 env[1736]: time="2024-02-12T20:26:48.205007609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9af7cf46df29ff59cd29db014fb9735a84c81a7cd4527a68e15fd3c99c69877 pid=2272 runtime=io.containerd.runc.v2 Feb 12 20:26:48.245103 systemd[1]: Started cri-containerd-6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9.scope. Feb 12 20:26:48.271841 systemd[1]: Started cri-containerd-b9af7cf46df29ff59cd29db014fb9735a84c81a7cd4527a68e15fd3c99c69877.scope. 
Feb 12 20:26:48.310164 env[1736]: time="2024-02-12T20:26:48.310076964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqcxl,Uid:d14fa5ec-0e1d-45f2-9380-48577bfb7fac,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\""
Feb 12 20:26:48.313521 env[1736]: time="2024-02-12T20:26:48.313446352Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 20:26:48.346159 env[1736]: time="2024-02-12T20:26:48.344627439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7l5s,Uid:760ce2b2-4056-47d7-bb8f-7a7f68d51409,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9af7cf46df29ff59cd29db014fb9735a84c81a7cd4527a68e15fd3c99c69877\""
Feb 12 20:26:48.509621 kubelet[2167]: E0212 20:26:48.509565 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:49.510074 kubelet[2167]: E0212 20:26:49.510001 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:50.510237 kubelet[2167]: E0212 20:26:50.510132 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:51.511010 kubelet[2167]: E0212 20:26:51.510943 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:52.497275 kubelet[2167]: E0212 20:26:52.497216 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:52.512604 kubelet[2167]: E0212 20:26:52.512545 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:53.513172 kubelet[2167]: E0212 20:26:53.513115 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:54.514817 kubelet[2167]: E0212 20:26:54.514733 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:55.516272 kubelet[2167]: E0212 20:26:55.516204 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:56.443887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695907721.mount: Deactivated successfully.
Feb 12 20:26:56.517653 kubelet[2167]: E0212 20:26:56.517578 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:57.518702 kubelet[2167]: E0212 20:26:57.518640 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:58.519385 kubelet[2167]: E0212 20:26:58.519320 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:59.519581 kubelet[2167]: E0212 20:26:59.519509 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:00.520723 kubelet[2167]: E0212 20:27:00.520643 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:00.594710 env[1736]: time="2024-02-12T20:27:00.594643071Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:00.599404 env[1736]: time="2024-02-12T20:27:00.599342915Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:00.603244 env[1736]: time="2024-02-12T20:27:00.603184968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:00.605094 env[1736]: time="2024-02-12T20:27:00.605028324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 12 20:27:00.607675 env[1736]: time="2024-02-12T20:27:00.607613020Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 12 20:27:00.611461 env[1736]: time="2024-02-12T20:27:00.611397055Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:27:00.631651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521608662.mount: Deactivated successfully.
Feb 12 20:27:00.643519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227287800.mount: Deactivated successfully.
Feb 12 20:27:00.655811 env[1736]: time="2024-02-12T20:27:00.655721250Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\""
Feb 12 20:27:00.657196 env[1736]: time="2024-02-12T20:27:00.657044904Z" level=info msg="StartContainer for \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\""
Feb 12 20:27:00.695279 systemd[1]: Started cri-containerd-3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61.scope.
Feb 12 20:27:00.758504 env[1736]: time="2024-02-12T20:27:00.758434897Z" level=info msg="StartContainer for \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\" returns successfully"
Feb 12 20:27:00.778937 systemd[1]: cri-containerd-3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61.scope: Deactivated successfully.
Feb 12 20:27:01.441328 update_engine[1727]: I0212 20:27:01.441258 1727 update_attempter.cc:509] Updating boot flags...
Feb 12 20:27:01.521223 kubelet[2167]: E0212 20:27:01.521176 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:01.626920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61-rootfs.mount: Deactivated successfully.
Feb 12 20:27:02.522309 kubelet[2167]: E0212 20:27:02.522241 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:03.523372 kubelet[2167]: E0212 20:27:03.523313 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:04.523587 kubelet[2167]: E0212 20:27:04.523510 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:04.629075 amazon-ssm-agent[1714]: 2024-02-12 20:27:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Feb 12 20:27:05.523810 kubelet[2167]: E0212 20:27:05.523747 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:06.524819 kubelet[2167]: E0212 20:27:06.524751 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:06.587016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836978714.mount: Deactivated successfully.
Feb 12 20:27:06.728750 env[1736]: time="2024-02-12T20:27:06.728685606Z" level=info msg="shim disconnected" id=3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61
Feb 12 20:27:06.729722 env[1736]: time="2024-02-12T20:27:06.729665240Z" level=warning msg="cleaning up after shim disconnected" id=3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61 namespace=k8s.io
Feb 12 20:27:06.729829 env[1736]: time="2024-02-12T20:27:06.729717554Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:06.751701 env[1736]: time="2024-02-12T20:27:06.751621773Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2654 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:07.299727 env[1736]: time="2024-02-12T20:27:07.299642582Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:27:07.325457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760334347.mount: Deactivated successfully.
Feb 12 20:27:07.336641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208558057.mount: Deactivated successfully.
Feb 12 20:27:07.344058 env[1736]: time="2024-02-12T20:27:07.343975272Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\""
Feb 12 20:27:07.345012 env[1736]: time="2024-02-12T20:27:07.344923506Z" level=info msg="StartContainer for \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\""
Feb 12 20:27:07.389303 env[1736]: time="2024-02-12T20:27:07.389222916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:07.396472 env[1736]: time="2024-02-12T20:27:07.396386343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:07.400183 env[1736]: time="2024-02-12T20:27:07.399322904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:07.402788 systemd[1]: Started cri-containerd-3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43.scope.
Feb 12 20:27:07.414808 env[1736]: time="2024-02-12T20:27:07.414716337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:07.415728 env[1736]: time="2024-02-12T20:27:07.415632396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\""
Feb 12 20:27:07.424546 env[1736]: time="2024-02-12T20:27:07.424491510Z" level=info msg="CreateContainer within sandbox \"b9af7cf46df29ff59cd29db014fb9735a84c81a7cd4527a68e15fd3c99c69877\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 20:27:07.501383 env[1736]: time="2024-02-12T20:27:07.501317063Z" level=info msg="StartContainer for \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\" returns successfully"
Feb 12 20:27:07.513399 env[1736]: time="2024-02-12T20:27:07.513318955Z" level=info msg="CreateContainer within sandbox \"b9af7cf46df29ff59cd29db014fb9735a84c81a7cd4527a68e15fd3c99c69877\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcc12ef0cb252391b9bc066d58b4cb3b9cf8a167b8975b5acb511bf1d6086d72\""
Feb 12 20:27:07.516399 env[1736]: time="2024-02-12T20:27:07.516328940Z" level=info msg="StartContainer for \"bcc12ef0cb252391b9bc066d58b4cb3b9cf8a167b8975b5acb511bf1d6086d72\""
Feb 12 20:27:07.525773 kubelet[2167]: E0212 20:27:07.525622 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:07.526492 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:27:07.527257 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:27:07.527580 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 20:27:07.531321 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:27:07.541714 systemd[1]: cri-containerd-3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43.scope: Deactivated successfully.
Feb 12 20:27:07.557333 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:27:07.595326 systemd[1]: Started cri-containerd-bcc12ef0cb252391b9bc066d58b4cb3b9cf8a167b8975b5acb511bf1d6086d72.scope.
Feb 12 20:27:07.762464 env[1736]: time="2024-02-12T20:27:07.762391185Z" level=info msg="StartContainer for \"bcc12ef0cb252391b9bc066d58b4cb3b9cf8a167b8975b5acb511bf1d6086d72\" returns successfully"
Feb 12 20:27:08.068999 env[1736]: time="2024-02-12T20:27:08.068912362Z" level=info msg="shim disconnected" id=3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43
Feb 12 20:27:08.068999 env[1736]: time="2024-02-12T20:27:08.068994234Z" level=warning msg="cleaning up after shim disconnected" id=3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43 namespace=k8s.io
Feb 12 20:27:08.069421 env[1736]: time="2024-02-12T20:27:08.069018261Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:08.089938 env[1736]: time="2024-02-12T20:27:08.088848132Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2828 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:08.309489 env[1736]: time="2024-02-12T20:27:08.309418300Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:27:08.315735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43-rootfs.mount: Deactivated successfully.
Feb 12 20:27:08.348488 kubelet[2167]: I0212 20:27:08.348339 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-g7l5s" podStartSLOduration=-9.223372013506508e+09 pod.CreationTimestamp="2024-02-12 20:26:45 +0000 UTC" firstStartedPulling="2024-02-12 20:26:48.348339273 +0000 UTC m=+18.136771819" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:08.320118639 +0000 UTC m=+38.108551233" watchObservedRunningTime="2024-02-12 20:27:08.348267288 +0000 UTC m=+38.136699846"
Feb 12 20:27:08.354636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654113929.mount: Deactivated successfully.
Feb 12 20:27:08.361469 env[1736]: time="2024-02-12T20:27:08.361396269Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\""
Feb 12 20:27:08.362794 env[1736]: time="2024-02-12T20:27:08.362732259Z" level=info msg="StartContainer for \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\""
Feb 12 20:27:08.398647 systemd[1]: Started cri-containerd-57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17.scope.
Feb 12 20:27:08.479393 env[1736]: time="2024-02-12T20:27:08.479325655Z" level=info msg="StartContainer for \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\" returns successfully"
Feb 12 20:27:08.484310 systemd[1]: cri-containerd-57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17.scope: Deactivated successfully.
Feb 12 20:27:08.526171 kubelet[2167]: E0212 20:27:08.526075 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:08.571579 env[1736]: time="2024-02-12T20:27:08.571502402Z" level=info msg="shim disconnected" id=57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17
Feb 12 20:27:08.571579 env[1736]: time="2024-02-12T20:27:08.571581166Z" level=warning msg="cleaning up after shim disconnected" id=57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17 namespace=k8s.io
Feb 12 20:27:08.572015 env[1736]: time="2024-02-12T20:27:08.571606537Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:08.588710 env[1736]: time="2024-02-12T20:27:08.588623019Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2923 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.316773 env[1736]: time="2024-02-12T20:27:09.316717557Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:27:09.338249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093019066.mount: Deactivated successfully.
Feb 12 20:27:09.347898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134356588.mount: Deactivated successfully.
Feb 12 20:27:09.353841 env[1736]: time="2024-02-12T20:27:09.353759814Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\""
Feb 12 20:27:09.356382 env[1736]: time="2024-02-12T20:27:09.356332100Z" level=info msg="StartContainer for \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\""
Feb 12 20:27:09.391680 systemd[1]: Started cri-containerd-75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3.scope.
Feb 12 20:27:09.451776 systemd[1]: cri-containerd-75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3.scope: Deactivated successfully.
Feb 12 20:27:09.455242 env[1736]: time="2024-02-12T20:27:09.454777098Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd14fa5ec_0e1d_45f2_9380_48577bfb7fac.slice/cri-containerd-75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3.scope/memory.events\": no such file or directory"
Feb 12 20:27:09.458749 env[1736]: time="2024-02-12T20:27:09.458687048Z" level=info msg="StartContainer for \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\" returns successfully"
Feb 12 20:27:09.526348 kubelet[2167]: E0212 20:27:09.526275 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:09.560988 env[1736]: time="2024-02-12T20:27:09.560920257Z" level=info msg="shim disconnected" id=75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3
Feb 12 20:27:09.561200 env[1736]: time="2024-02-12T20:27:09.560989672Z" level=warning msg="cleaning up after shim disconnected" id=75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3 namespace=k8s.io
Feb 12 20:27:09.561200 env[1736]: time="2024-02-12T20:27:09.561012558Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.575605 env[1736]: time="2024-02-12T20:27:09.575444103Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2981 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:10.325366 env[1736]: time="2024-02-12T20:27:10.325264444Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:27:10.349156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713342548.mount: Deactivated successfully.
Feb 12 20:27:10.359347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073780493.mount: Deactivated successfully.
Feb 12 20:27:10.366175 env[1736]: time="2024-02-12T20:27:10.366100745Z" level=info msg="CreateContainer within sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\""
Feb 12 20:27:10.366843 env[1736]: time="2024-02-12T20:27:10.366777717Z" level=info msg="StartContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\""
Feb 12 20:27:10.400527 systemd[1]: Started cri-containerd-4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878.scope.
Feb 12 20:27:10.511953 env[1736]: time="2024-02-12T20:27:10.511843872Z" level=info msg="StartContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" returns successfully"
Feb 12 20:27:10.526817 kubelet[2167]: E0212 20:27:10.526753 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:10.665954 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 12 20:27:10.746570 kubelet[2167]: I0212 20:27:10.745401 2167 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 20:27:11.283998 kernel: Initializing XFRM netlink socket
Feb 12 20:27:11.289029 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 12 20:27:11.350745 kubelet[2167]: I0212 20:27:11.350692 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bqcxl" podStartSLOduration=-9.223372010504164e+09 pod.CreationTimestamp="2024-02-12 20:26:45 +0000 UTC" firstStartedPulling="2024-02-12 20:26:48.312659995 +0000 UTC m=+18.101092541" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:11.349061747 +0000 UTC m=+41.137494341" watchObservedRunningTime="2024-02-12 20:27:11.35061134 +0000 UTC m=+41.139043934"
Feb 12 20:27:11.527644 kubelet[2167]: E0212 20:27:11.527596 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:12.497224 kubelet[2167]: E0212 20:27:12.497161 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:12.528851 kubelet[2167]: E0212 20:27:12.528791 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:13.108115 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 20:27:13.108254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 20:27:13.106609 systemd-networkd[1539]: cilium_host: Link UP
Feb 12 20:27:13.106950 systemd-networkd[1539]: cilium_net: Link UP
Feb 12 20:27:13.107280 systemd-networkd[1539]: cilium_net: Gained carrier
Feb 12 20:27:13.109520 systemd-networkd[1539]: cilium_host: Gained carrier
Feb 12 20:27:13.110581 (udev-worker)[3125]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:13.110741 (udev-worker)[3091]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:13.293109 systemd-networkd[1539]: cilium_vxlan: Link UP
Feb 12 20:27:13.293125 systemd-networkd[1539]: cilium_vxlan: Gained carrier
Feb 12 20:27:13.529311 kubelet[2167]: E0212 20:27:13.529254 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:13.704149 systemd-networkd[1539]: cilium_net: Gained IPv6LL
Feb 12 20:27:13.825914 kernel: NET: Registered PF_ALG protocol family
Feb 12 20:27:13.959101 systemd-networkd[1539]: cilium_host: Gained IPv6LL
Feb 12 20:27:14.530218 kubelet[2167]: E0212 20:27:14.530149 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:15.124106 systemd-networkd[1539]: lxc_health: Link UP
Feb 12 20:27:15.128368 (udev-worker)[3136]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:15.134178 systemd-networkd[1539]: lxc_health: Gained carrier
Feb 12 20:27:15.134940 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:27:15.239122 systemd-networkd[1539]: cilium_vxlan: Gained IPv6LL
Feb 12 20:27:15.531035 kubelet[2167]: E0212 20:27:15.530986 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:15.614852 kubelet[2167]: I0212 20:27:15.614786 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:27:15.626819 systemd[1]: Created slice kubepods-besteffort-pod8487ed95_c395_4c41_99a8_47c8cbd12d87.slice.
Feb 12 20:27:15.717958 kubelet[2167]: I0212 20:27:15.717916 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsrq\" (UniqueName: \"kubernetes.io/projected/8487ed95-c395-4c41-99a8-47c8cbd12d87-kube-api-access-lmsrq\") pod \"nginx-deployment-8ffc5cf85-8v7ts\" (UID: \"8487ed95-c395-4c41-99a8-47c8cbd12d87\") " pod="default/nginx-deployment-8ffc5cf85-8v7ts"
Feb 12 20:27:15.936778 env[1736]: time="2024-02-12T20:27:15.936237435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-8v7ts,Uid:8487ed95-c395-4c41-99a8-47c8cbd12d87,Namespace:default,Attempt:0,}"
Feb 12 20:27:16.028026 kernel: eth0: renamed from tmpa5fae
Feb 12 20:27:16.033004 systemd-networkd[1539]: lxce02cb5148dc8: Link UP
Feb 12 20:27:16.041035 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce02cb5148dc8: link becomes ready
Feb 12 20:27:16.041682 systemd-networkd[1539]: lxce02cb5148dc8: Gained carrier
Feb 12 20:27:16.532480 kubelet[2167]: E0212 20:27:16.532425 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:16.967140 systemd-networkd[1539]: lxc_health: Gained IPv6LL
Feb 12 20:27:17.534058 kubelet[2167]: E0212 20:27:17.533961 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:17.863635 systemd-networkd[1539]: lxce02cb5148dc8: Gained IPv6LL
Feb 12 20:27:18.534651 kubelet[2167]: E0212 20:27:18.534585 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:19.535100 kubelet[2167]: E0212 20:27:19.535055 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:20.536226 kubelet[2167]: E0212 20:27:20.536179 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:21.537366 kubelet[2167]: E0212 20:27:21.537313 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:22.538390 kubelet[2167]: E0212 20:27:22.538339 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:23.539854 kubelet[2167]: E0212 20:27:23.539784 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:24.495017 env[1736]: time="2024-02-12T20:27:24.494779533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:27:24.495612 env[1736]: time="2024-02-12T20:27:24.494852329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:27:24.495612 env[1736]: time="2024-02-12T20:27:24.495059257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:27:24.496126 env[1736]: time="2024-02-12T20:27:24.496007564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5fae104a6f77ccd9e0c20892dac24ac01acafe15f758075f14a08008d51a51c pid=3502 runtime=io.containerd.runc.v2
Feb 12 20:27:24.540149 kubelet[2167]: E0212 20:27:24.539929 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:24.541635 systemd[1]: Started cri-containerd-a5fae104a6f77ccd9e0c20892dac24ac01acafe15f758075f14a08008d51a51c.scope.
Feb 12 20:27:24.609188 env[1736]: time="2024-02-12T20:27:24.607808262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-8v7ts,Uid:8487ed95-c395-4c41-99a8-47c8cbd12d87,Namespace:default,Attempt:0,} returns sandbox id \"a5fae104a6f77ccd9e0c20892dac24ac01acafe15f758075f14a08008d51a51c\""
Feb 12 20:27:24.611592 env[1736]: time="2024-02-12T20:27:24.611542568Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:27:25.540622 kubelet[2167]: E0212 20:27:25.540560 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:26.541844 kubelet[2167]: E0212 20:27:26.541758 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:27.543031 kubelet[2167]: E0212 20:27:27.542928 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:28.502947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401215733.mount: Deactivated successfully.
Feb 12 20:27:28.543298 kubelet[2167]: E0212 20:27:28.543226 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:29.543975 kubelet[2167]: E0212 20:27:29.543835 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:30.122361 env[1736]: time="2024-02-12T20:27:30.122299802Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:30.125018 env[1736]: time="2024-02-12T20:27:30.124964296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:30.129961 env[1736]: time="2024-02-12T20:27:30.129881676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:30.135031 env[1736]: time="2024-02-12T20:27:30.134954116Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:30.138347 env[1736]: time="2024-02-12T20:27:30.136766903Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 20:27:30.141842 env[1736]: time="2024-02-12T20:27:30.141756059Z" level=info msg="CreateContainer within sandbox \"a5fae104a6f77ccd9e0c20892dac24ac01acafe15f758075f14a08008d51a51c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 20:27:30.226239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362612523.mount: Deactivated successfully.
Feb 12 20:27:30.237441 env[1736]: time="2024-02-12T20:27:30.237364785Z" level=info msg="CreateContainer within sandbox \"a5fae104a6f77ccd9e0c20892dac24ac01acafe15f758075f14a08008d51a51c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"90112482a7c2a0832336d2706eec6cd026068defaf0620805cc079b7bfd20da3\""
Feb 12 20:27:30.238840 env[1736]: time="2024-02-12T20:27:30.238780820Z" level=info msg="StartContainer for \"90112482a7c2a0832336d2706eec6cd026068defaf0620805cc079b7bfd20da3\""
Feb 12 20:27:30.273339 systemd[1]: Started cri-containerd-90112482a7c2a0832336d2706eec6cd026068defaf0620805cc079b7bfd20da3.scope.
Feb 12 20:27:30.340502 env[1736]: time="2024-02-12T20:27:30.340411918Z" level=info msg="StartContainer for \"90112482a7c2a0832336d2706eec6cd026068defaf0620805cc079b7bfd20da3\" returns successfully"
Feb 12 20:27:30.384309 kubelet[2167]: I0212 20:27:30.384137 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-8v7ts" podStartSLOduration=-9.2233720214707e+09 pod.CreationTimestamp="2024-02-12 20:27:15 +0000 UTC" firstStartedPulling="2024-02-12 20:27:24.610798776 +0000 UTC m=+54.399231322" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:30.383664763 +0000 UTC m=+60.172097345" watchObservedRunningTime="2024-02-12 20:27:30.384076744 +0000 UTC m=+60.172509326"
Feb 12 20:27:30.545304 kubelet[2167]: E0212 20:27:30.545230 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:31.545688 kubelet[2167]: E0212 20:27:31.545616 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:32.497079 kubelet[2167]: E0212 20:27:32.497013 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:32.546440 kubelet[2167]: E0212 20:27:32.546402 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:33.548025 kubelet[2167]: E0212 20:27:33.547956 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:34.548434 kubelet[2167]: E0212 20:27:34.548355 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:34.777909 kubelet[2167]: I0212 20:27:34.777801 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:27:34.788059 systemd[1]: Created slice kubepods-besteffort-pod8c692bc0_12c7_4385_a9c2_c1a8c37dcf73.slice.
Feb 12 20:27:34.843952 kubelet[2167]: I0212 20:27:34.843804 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4qs\" (UniqueName: \"kubernetes.io/projected/8c692bc0-12c7-4385-a9c2-c1a8c37dcf73-kube-api-access-tq4qs\") pod \"nfs-server-provisioner-0\" (UID: \"8c692bc0-12c7-4385-a9c2-c1a8c37dcf73\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:27:34.844232 kubelet[2167]: I0212 20:27:34.844207 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8c692bc0-12c7-4385-a9c2-c1a8c37dcf73-data\") pod \"nfs-server-provisioner-0\" (UID: \"8c692bc0-12c7-4385-a9c2-c1a8c37dcf73\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:27:35.094815 env[1736]: time="2024-02-12T20:27:35.094643874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8c692bc0-12c7-4385-a9c2-c1a8c37dcf73,Namespace:default,Attempt:0,}"
Feb 12 20:27:35.180249 (udev-worker)[3622]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:35.181189 (udev-worker)[3621]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:35.187656 systemd-networkd[1539]: lxc0ff567d01465: Link UP
Feb 12 20:27:35.200891 kernel: eth0: renamed from tmp37cd8
Feb 12 20:27:35.215381 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:27:35.215521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0ff567d01465: link becomes ready
Feb 12 20:27:35.216066 systemd-networkd[1539]: lxc0ff567d01465: Gained carrier
Feb 12 20:27:35.549155 kubelet[2167]: E0212 20:27:35.549086 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:35.647885 env[1736]: time="2024-02-12T20:27:35.647711288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:27:35.648186 env[1736]: time="2024-02-12T20:27:35.648105686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:27:35.648408 env[1736]: time="2024-02-12T20:27:35.648346345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:27:35.648936 env[1736]: time="2024-02-12T20:27:35.648843995Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45 pid=3677 runtime=io.containerd.runc.v2
Feb 12 20:27:35.683127 systemd[1]: Started cri-containerd-37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45.scope.
Feb 12 20:27:35.756003 env[1736]: time="2024-02-12T20:27:35.755943165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8c692bc0-12c7-4385-a9c2-c1a8c37dcf73,Namespace:default,Attempt:0,} returns sandbox id \"37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45\""
Feb 12 20:27:35.758643 env[1736]: time="2024-02-12T20:27:35.758591782Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 20:27:35.989289 systemd[1]: run-containerd-runc-k8s.io-37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45-runc.dhEUS7.mount: Deactivated successfully.
Feb 12 20:27:36.549575 kubelet[2167]: E0212 20:27:36.549477 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:36.871269 systemd-networkd[1539]: lxc0ff567d01465: Gained IPv6LL
Feb 12 20:27:37.549970 kubelet[2167]: E0212 20:27:37.549910 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:38.550746 kubelet[2167]: E0212 20:27:38.550673 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:39.447579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821075920.mount: Deactivated successfully.
Feb 12 20:27:39.551533 kubelet[2167]: E0212 20:27:39.551446 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:40.552687 kubelet[2167]: E0212 20:27:40.552602 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:41.553841 kubelet[2167]: E0212 20:27:41.553773 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:42.554513 kubelet[2167]: E0212 20:27:42.554369 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:43.370338 env[1736]: time="2024-02-12T20:27:43.370274590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:43.374322 env[1736]: time="2024-02-12T20:27:43.374258488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:43.379358 env[1736]: time="2024-02-12T20:27:43.379279242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:43.383271 env[1736]: time="2024-02-12T20:27:43.383192722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:43.385328 env[1736]: time="2024-02-12T20:27:43.385251994Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 12 20:27:43.391183 env[1736]: time="2024-02-12T20:27:43.391092969Z" level=info msg="CreateContainer within sandbox \"37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 20:27:43.412083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268450954.mount: Deactivated successfully.
Feb 12 20:27:43.424451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189298819.mount: Deactivated successfully.
Feb 12 20:27:43.432466 env[1736]: time="2024-02-12T20:27:43.432374382Z" level=info msg="CreateContainer within sandbox \"37cd8af351a91762587899e12188698b46c26ab7d556d8a9863d8e1ca8a84e45\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"cf113b5593975250a323fb5d2448d4499a2858fcfd492a523ba552728718025e\""
Feb 12 20:27:43.433835 env[1736]: time="2024-02-12T20:27:43.433762754Z" level=info msg="StartContainer for \"cf113b5593975250a323fb5d2448d4499a2858fcfd492a523ba552728718025e\""
Feb 12 20:27:43.471145 systemd[1]: Started cri-containerd-cf113b5593975250a323fb5d2448d4499a2858fcfd492a523ba552728718025e.scope.
Feb 12 20:27:43.542580 env[1736]: time="2024-02-12T20:27:43.542473437Z" level=info msg="StartContainer for \"cf113b5593975250a323fb5d2448d4499a2858fcfd492a523ba552728718025e\" returns successfully"
Feb 12 20:27:43.555417 kubelet[2167]: E0212 20:27:43.555372 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:44.432053 kubelet[2167]: I0212 20:27:44.431990 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.22337202642284e+09 pod.CreationTimestamp="2024-02-12 20:27:34 +0000 UTC" firstStartedPulling="2024-02-12 20:27:35.758142038 +0000 UTC m=+65.546574584" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:44.4288397 +0000 UTC m=+74.217272270" watchObservedRunningTime="2024-02-12 20:27:44.431936498 +0000 UTC m=+74.220369092"
Feb 12 20:27:44.556400 kubelet[2167]: E0212 20:27:44.556346 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:45.558161 kubelet[2167]: E0212 20:27:45.558095 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:46.559239 kubelet[2167]: E0212 20:27:46.559162 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:47.559381 kubelet[2167]: E0212 20:27:47.559306 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:48.560481 kubelet[2167]: E0212 20:27:48.560438 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:49.562273 kubelet[2167]: E0212 20:27:49.562225 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:50.563604 kubelet[2167]: E0212 20:27:50.563529 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:51.564674 kubelet[2167]: E0212 20:27:51.564626 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:52.497368 kubelet[2167]: E0212 20:27:52.497304 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:52.566154 kubelet[2167]: E0212 20:27:52.566102 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:53.567231 kubelet[2167]: E0212 20:27:53.567178 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:53.755500 kubelet[2167]: I0212 20:27:53.755367 2167 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:27:53.767336 systemd[1]: Created slice kubepods-besteffort-pod83bb9120_2518_4011_9838_f49d768754fe.slice.
Feb 12 20:27:53.868519 kubelet[2167]: I0212 20:27:53.867998 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dad517b0-e720-4b54-b1a4-2499edb80628\" (UniqueName: \"kubernetes.io/nfs/83bb9120-2518-4011-9838-f49d768754fe-pvc-dad517b0-e720-4b54-b1a4-2499edb80628\") pod \"test-pod-1\" (UID: \"83bb9120-2518-4011-9838-f49d768754fe\") " pod="default/test-pod-1"
Feb 12 20:27:53.868981 kubelet[2167]: I0212 20:27:53.868826 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phl6x\" (UniqueName: \"kubernetes.io/projected/83bb9120-2518-4011-9838-f49d768754fe-kube-api-access-phl6x\") pod \"test-pod-1\" (UID: \"83bb9120-2518-4011-9838-f49d768754fe\") " pod="default/test-pod-1"
Feb 12 20:27:54.053934 kernel: FS-Cache: Loaded
Feb 12 20:27:54.151186 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 20:27:54.151345 kernel: RPC: Registered udp transport module.
Feb 12 20:27:54.156042 kernel: RPC: Registered tcp transport module.
Feb 12 20:27:54.156183 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 20:27:54.281912 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 20:27:54.540822 kernel: NFS: Registering the id_resolver key type
Feb 12 20:27:54.540999 kernel: Key type id_resolver registered
Feb 12 20:27:54.541057 kernel: Key type id_legacy registered
Feb 12 20:27:54.568885 kubelet[2167]: E0212 20:27:54.568794 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:54.695216 nfsidmap[3819]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 12 20:27:54.700653 nfsidmap[3820]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 12 20:27:54.974906 env[1736]: time="2024-02-12T20:27:54.974724280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83bb9120-2518-4011-9838-f49d768754fe,Namespace:default,Attempt:0,}"
Feb 12 20:27:55.031325 (udev-worker)[3808]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:55.031326 (udev-worker)[3814]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:27:55.034301 systemd-networkd[1539]: lxce7a04634e11e: Link UP
Feb 12 20:27:55.049940 kernel: eth0: renamed from tmpc991d
Feb 12 20:27:55.063340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:27:55.063490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7a04634e11e: link becomes ready
Feb 12 20:27:55.063719 systemd-networkd[1539]: lxce7a04634e11e: Gained carrier
Feb 12 20:27:55.520717 env[1736]: time="2024-02-12T20:27:55.520355998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:27:55.520717 env[1736]: time="2024-02-12T20:27:55.520431703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:27:55.520717 env[1736]: time="2024-02-12T20:27:55.520458042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:27:55.521125 env[1736]: time="2024-02-12T20:27:55.520795336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c991d3fac5af43715b5236b25d77b379393986d436c39432d239fe85662ef7df pid=3849 runtime=io.containerd.runc.v2
Feb 12 20:27:55.553342 systemd[1]: Started cri-containerd-c991d3fac5af43715b5236b25d77b379393986d436c39432d239fe85662ef7df.scope.
Feb 12 20:27:55.569758 kubelet[2167]: E0212 20:27:55.569680 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:55.635650 env[1736]: time="2024-02-12T20:27:55.635580302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83bb9120-2518-4011-9838-f49d768754fe,Namespace:default,Attempt:0,} returns sandbox id \"c991d3fac5af43715b5236b25d77b379393986d436c39432d239fe85662ef7df\""
Feb 12 20:27:55.638931 env[1736]: time="2024-02-12T20:27:55.638825656Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:27:55.991591 env[1736]: time="2024-02-12T20:27:55.991532928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:56.000988 env[1736]: time="2024-02-12T20:27:56.000914705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:56.004215 env[1736]: time="2024-02-12T20:27:56.004149735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:56.007705 env[1736]: time="2024-02-12T20:27:56.007651586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:27:56.008975 env[1736]: time="2024-02-12T20:27:56.008904902Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 12 20:27:56.013008 env[1736]: time="2024-02-12T20:27:56.012953657Z" level=info msg="CreateContainer within sandbox \"c991d3fac5af43715b5236b25d77b379393986d436c39432d239fe85662ef7df\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 20:27:56.037714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658670543.mount: Deactivated successfully.
Feb 12 20:27:56.047922 env[1736]: time="2024-02-12T20:27:56.047825990Z" level=info msg="CreateContainer within sandbox \"c991d3fac5af43715b5236b25d77b379393986d436c39432d239fe85662ef7df\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"23bf8a6147f506957e17c1af068514ded9d76236fdd4306e1ee82bf61437daca\""
Feb 12 20:27:56.049424 env[1736]: time="2024-02-12T20:27:56.049331609Z" level=info msg="StartContainer for \"23bf8a6147f506957e17c1af068514ded9d76236fdd4306e1ee82bf61437daca\""
Feb 12 20:27:56.086933 systemd[1]: Started cri-containerd-23bf8a6147f506957e17c1af068514ded9d76236fdd4306e1ee82bf61437daca.scope.
Feb 12 20:27:56.151086 env[1736]: time="2024-02-12T20:27:56.151017828Z" level=info msg="StartContainer for \"23bf8a6147f506957e17c1af068514ded9d76236fdd4306e1ee82bf61437daca\" returns successfully"
Feb 12 20:27:56.569994 kubelet[2167]: E0212 20:27:56.569926 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:56.903203 systemd-networkd[1539]: lxce7a04634e11e: Gained IPv6LL
Feb 12 20:27:57.028520 systemd[1]: run-containerd-runc-k8s.io-23bf8a6147f506957e17c1af068514ded9d76236fdd4306e1ee82bf61437daca-runc.MiUryG.mount: Deactivated successfully.
Feb 12 20:27:57.570663 kubelet[2167]: E0212 20:27:57.570618 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:58.572396 kubelet[2167]: E0212 20:27:58.572314 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:59.572607 kubelet[2167]: E0212 20:27:59.572535 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:00.573521 kubelet[2167]: E0212 20:28:00.573474 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:01.575125 kubelet[2167]: E0212 20:28:01.575051 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:01.609902 kubelet[2167]: I0212 20:28:01.609822 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372010245617e+09 pod.CreationTimestamp="2024-02-12 20:27:35 +0000 UTC" firstStartedPulling="2024-02-12 20:27:55.637622704 +0000 UTC m=+85.426055250" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:56.456362902 +0000 UTC m=+86.244795484" watchObservedRunningTime="2024-02-12 20:28:01.609158407 +0000 UTC m=+91.397590977"
Feb 12 20:28:01.643599 systemd[1]: run-containerd-runc-k8s.io-4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878-runc.QdiiS6.mount: Deactivated successfully.
Feb 12 20:28:01.680855 env[1736]: time="2024-02-12T20:28:01.680699439Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:28:01.692131 env[1736]: time="2024-02-12T20:28:01.692062021Z" level=info msg="StopContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" with timeout 1 (s)"
Feb 12 20:28:01.692824 env[1736]: time="2024-02-12T20:28:01.692763233Z" level=info msg="Stop container \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" with signal terminated"
Feb 12 20:28:01.706169 systemd-networkd[1539]: lxc_health: Link DOWN
Feb 12 20:28:01.706192 systemd-networkd[1539]: lxc_health: Lost carrier
Feb 12 20:28:01.737752 systemd[1]: cri-containerd-4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878.scope: Deactivated successfully.
Feb 12 20:28:01.738394 systemd[1]: cri-containerd-4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878.scope: Consumed 14.942s CPU time.
Feb 12 20:28:01.772647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878-rootfs.mount: Deactivated successfully.
Feb 12 20:28:02.575717 kubelet[2167]: E0212 20:28:02.575647 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:02.657367 kubelet[2167]: E0212 20:28:02.657311 2167 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:28:02.708893 env[1736]: time="2024-02-12T20:28:02.708797986Z" level=info msg="Kill container \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\""
Feb 12 20:28:02.853017 env[1736]: time="2024-02-12T20:28:02.852308559Z" level=info msg="shim disconnected" id=4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878
Feb 12 20:28:02.853017 env[1736]: time="2024-02-12T20:28:02.852428604Z" level=warning msg="cleaning up after shim disconnected" id=4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878 namespace=k8s.io
Feb 12 20:28:02.853017 env[1736]: time="2024-02-12T20:28:02.852479722Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:02.869740 env[1736]: time="2024-02-12T20:28:02.869661493Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3981 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:02.872559 env[1736]: time="2024-02-12T20:28:02.872483625Z" level=info msg="StopContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" returns successfully"
Feb 12 20:28:02.873423 env[1736]: time="2024-02-12T20:28:02.873377097Z" level=info msg="StopPodSandbox for \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\""
Feb 12 20:28:02.873741 env[1736]: time="2024-02-12T20:28:02.873700512Z" level=info msg="Container to stop \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:28:02.873993 env[1736]: time="2024-02-12T20:28:02.873957257Z" level=info msg="Container to stop \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:28:02.874142 env[1736]: time="2024-02-12T20:28:02.874108141Z" level=info msg="Container to stop \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:28:02.874355 env[1736]: time="2024-02-12T20:28:02.874321423Z" level=info msg="Container to stop \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:28:02.874485 env[1736]: time="2024-02-12T20:28:02.874451908Z" level=info msg="Container to stop \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:28:02.877457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9-shm.mount: Deactivated successfully.
Feb 12 20:28:02.888684 systemd[1]: cri-containerd-6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9.scope: Deactivated successfully.
Feb 12 20:28:02.925429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9-rootfs.mount: Deactivated successfully.
Feb 12 20:28:02.938703 env[1736]: time="2024-02-12T20:28:02.938641639Z" level=info msg="shim disconnected" id=6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9
Feb 12 20:28:02.939142 env[1736]: time="2024-02-12T20:28:02.939101407Z" level=warning msg="cleaning up after shim disconnected" id=6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9 namespace=k8s.io
Feb 12 20:28:02.939284 env[1736]: time="2024-02-12T20:28:02.939254966Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:02.954008 env[1736]: time="2024-02-12T20:28:02.953951137Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:02.954778 env[1736]: time="2024-02-12T20:28:02.954729796Z" level=info msg="TearDown network for sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" successfully"
Feb 12 20:28:02.954999 env[1736]: time="2024-02-12T20:28:02.954963033Z" level=info msg="StopPodSandbox for \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" returns successfully"
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022386 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-lib-modules\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022453 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-xtables-lock\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022493 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-etc-cni-netd\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022520 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022546 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hubble-tls\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023221 kubelet[2167]: I0212 20:28:03.022462 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022595 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n859h\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-kube-api-access-n859h\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022673 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-net\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022715 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cni-path\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022779 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-cgroup\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022850 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.023730 kubelet[2167]: I0212 20:28:03.022953 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-run\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023035 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-bpf-maps\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023118 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hostproc\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023189 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-clustermesh-secrets\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023234 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-kernel\") pod \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\" (UID: \"d14fa5ec-0e1d-45f2-9380-48577bfb7fac\") "
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023312 2167 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-lib-modules\") on node \"172.31.21.6\" DevicePath \"\""
Feb 12 20:28:03.024126 kubelet[2167]: I0212 20:28:03.023366 2167 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-xtables-lock\") on node \"172.31.21.6\" DevicePath \"\""
Feb 12 20:28:03.024471 kubelet[2167]: I0212 20:28:03.023412 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:28:03.024471 kubelet[2167]: I0212 20:28:03.023491 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:28:03.024471 kubelet[2167]: I0212 20:28:03.023558 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:28:03.024471 kubelet[2167]: I0212 20:28:03.023599 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cni-path" (OuterVolumeSpecName: "cni-path") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:03.024471 kubelet[2167]: I0212 20:28:03.023664 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:03.024774 kubelet[2167]: W0212 20:28:03.024033 2167 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d14fa5ec-0e1d-45f2-9380-48577bfb7fac/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:28:03.031462 kubelet[2167]: I0212 20:28:03.031275 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:28:03.032525 kubelet[2167]: I0212 20:28:03.032476 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-kube-api-access-n859h" (OuterVolumeSpecName: "kube-api-access-n859h") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "kube-api-access-n859h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:03.032770 kubelet[2167]: I0212 20:28:03.032740 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hostproc" (OuterVolumeSpecName: "hostproc") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:03.032971 kubelet[2167]: I0212 20:28:03.032916 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:03.034443 systemd[1]: var-lib-kubelet-pods-d14fa5ec\x2d0e1d\x2d45f2\x2d9380\x2d48577bfb7fac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn859h.mount: Deactivated successfully. Feb 12 20:28:03.036907 kubelet[2167]: I0212 20:28:03.036779 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:03.044274 systemd[1]: var-lib-kubelet-pods-d14fa5ec\x2d0e1d\x2d45f2\x2d9380\x2d48577bfb7fac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 20:28:03.046054 kubelet[2167]: I0212 20:28:03.046003 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:03.047179 kubelet[2167]: I0212 20:28:03.047131 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d14fa5ec-0e1d-45f2-9380-48577bfb7fac" (UID: "d14fa5ec-0e1d-45f2-9380-48577bfb7fac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:03.047933 systemd[1]: var-lib-kubelet-pods-d14fa5ec\x2d0e1d\x2d45f2\x2d9380\x2d48577bfb7fac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.124328 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-config-path\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.124964 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-run\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.124998 2167 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-bpf-maps\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.125071 2167 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hostproc\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.125116 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cilium-cgroup\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.125146 2167 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-clustermesh-secrets\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.125203 2167 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-kernel\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.125503 kubelet[2167]: I0212 20:28:03.125233 2167 reconciler_common.go:295] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-etc-cni-netd\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.126103 kubelet[2167]: I0212 20:28:03.125288 2167 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-hubble-tls\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.126103 kubelet[2167]: I0212 20:28:03.125312 2167 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-host-proc-sys-net\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.126103 kubelet[2167]: I0212 20:28:03.125352 2167 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-cni-path\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.126103 kubelet[2167]: I0212 20:28:03.125375 2167 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-n859h\" (UniqueName: \"kubernetes.io/projected/d14fa5ec-0e1d-45f2-9380-48577bfb7fac-kube-api-access-n859h\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:03.205891 systemd[1]: Removed slice kubepods-burstable-podd14fa5ec_0e1d_45f2_9380_48577bfb7fac.slice. Feb 12 20:28:03.206102 systemd[1]: kubepods-burstable-podd14fa5ec_0e1d_45f2_9380_48577bfb7fac.slice: Consumed 15.154s CPU time. 
Feb 12 20:28:03.463465 kubelet[2167]: I0212 20:28:03.463331 2167 scope.go:115] "RemoveContainer" containerID="4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878" Feb 12 20:28:03.469005 env[1736]: time="2024-02-12T20:28:03.468935852Z" level=info msg="RemoveContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\"" Feb 12 20:28:03.479254 env[1736]: time="2024-02-12T20:28:03.479034759Z" level=info msg="RemoveContainer for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" returns successfully" Feb 12 20:28:03.479545 kubelet[2167]: I0212 20:28:03.479495 2167 scope.go:115] "RemoveContainer" containerID="75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3" Feb 12 20:28:03.482482 env[1736]: time="2024-02-12T20:28:03.482049946Z" level=info msg="RemoveContainer for \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\"" Feb 12 20:28:03.486877 env[1736]: time="2024-02-12T20:28:03.486792153Z" level=info msg="RemoveContainer for \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\" returns successfully" Feb 12 20:28:03.487498 kubelet[2167]: I0212 20:28:03.487464 2167 scope.go:115] "RemoveContainer" containerID="57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17" Feb 12 20:28:03.493387 env[1736]: time="2024-02-12T20:28:03.493307796Z" level=info msg="RemoveContainer for \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\"" Feb 12 20:28:03.498808 env[1736]: time="2024-02-12T20:28:03.498713742Z" level=info msg="RemoveContainer for \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\" returns successfully" Feb 12 20:28:03.499161 kubelet[2167]: I0212 20:28:03.499112 2167 scope.go:115] "RemoveContainer" containerID="3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43" Feb 12 20:28:03.501699 env[1736]: time="2024-02-12T20:28:03.501231302Z" level=info msg="RemoveContainer for 
\"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\"" Feb 12 20:28:03.505976 env[1736]: time="2024-02-12T20:28:03.505912443Z" level=info msg="RemoveContainer for \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\" returns successfully" Feb 12 20:28:03.506510 kubelet[2167]: I0212 20:28:03.506471 2167 scope.go:115] "RemoveContainer" containerID="3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61" Feb 12 20:28:03.509003 env[1736]: time="2024-02-12T20:28:03.508934974Z" level=info msg="RemoveContainer for \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\"" Feb 12 20:28:03.514663 env[1736]: time="2024-02-12T20:28:03.514589974Z" level=info msg="RemoveContainer for \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\" returns successfully" Feb 12 20:28:03.515319 kubelet[2167]: I0212 20:28:03.515273 2167 scope.go:115] "RemoveContainer" containerID="4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878" Feb 12 20:28:03.515968 env[1736]: time="2024-02-12T20:28:03.515800228Z" level=error msg="ContainerStatus for \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\": not found" Feb 12 20:28:03.516513 kubelet[2167]: E0212 20:28:03.516467 2167 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\": not found" containerID="4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878" Feb 12 20:28:03.516682 kubelet[2167]: I0212 20:28:03.516543 2167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878} err="failed to get container status 
\"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\": rpc error: code = NotFound desc = an error occurred when try to find container \"4753930f426e461f2733736a41f096a2b320ad97bf7e73a5c7681fc57c50e878\": not found" Feb 12 20:28:03.516682 kubelet[2167]: I0212 20:28:03.516570 2167 scope.go:115] "RemoveContainer" containerID="75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3" Feb 12 20:28:03.517064 env[1736]: time="2024-02-12T20:28:03.516957562Z" level=error msg="ContainerStatus for \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\": not found" Feb 12 20:28:03.517356 kubelet[2167]: E0212 20:28:03.517307 2167 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\": not found" containerID="75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3" Feb 12 20:28:03.517513 kubelet[2167]: I0212 20:28:03.517380 2167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3} err="failed to get container status \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"75093cfd10c943b110ec2aa03b20a28d7a88b3220fe403341e9a075d9e388eb3\": not found" Feb 12 20:28:03.517513 kubelet[2167]: I0212 20:28:03.517408 2167 scope.go:115] "RemoveContainer" containerID="57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17" Feb 12 20:28:03.518270 env[1736]: time="2024-02-12T20:28:03.518165667Z" level=error msg="ContainerStatus for \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\": not found" Feb 12 20:28:03.518797 kubelet[2167]: E0212 20:28:03.518748 2167 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\": not found" containerID="57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17" Feb 12 20:28:03.519010 kubelet[2167]: I0212 20:28:03.518813 2167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17} err="failed to get container status \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\": rpc error: code = NotFound desc = an error occurred when try to find container \"57d100d4ec9f663f52636ce0b2c8a6c3b7e0b861d6cebced03a780368bc3dd17\": not found" Feb 12 20:28:03.519010 kubelet[2167]: I0212 20:28:03.518839 2167 scope.go:115] "RemoveContainer" containerID="3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43" Feb 12 20:28:03.519527 env[1736]: time="2024-02-12T20:28:03.519428503Z" level=error msg="ContainerStatus for \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\": not found" Feb 12 20:28:03.520134 kubelet[2167]: E0212 20:28:03.520077 2167 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\": not found" containerID="3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43" Feb 12 20:28:03.520354 kubelet[2167]: I0212 
20:28:03.520151 2167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43} err="failed to get container status \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a3bd076c64e74f50062f4d612a85aabbaa73fd8782096f8c44494c9f52a8a43\": not found" Feb 12 20:28:03.520354 kubelet[2167]: I0212 20:28:03.520178 2167 scope.go:115] "RemoveContainer" containerID="3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61" Feb 12 20:28:03.520909 env[1736]: time="2024-02-12T20:28:03.520773237Z" level=error msg="ContainerStatus for \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\": not found" Feb 12 20:28:03.521459 kubelet[2167]: E0212 20:28:03.521417 2167 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\": not found" containerID="3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61" Feb 12 20:28:03.521649 kubelet[2167]: I0212 20:28:03.521486 2167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61} err="failed to get container status \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fb40d0ba4b36472a2005ffafe1e55786d33afdab453a27ccb172e557e531c61\": not found" Feb 12 20:28:03.576117 kubelet[2167]: E0212 20:28:03.576047 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 20:28:04.576811 kubelet[2167]: E0212 20:28:04.576766 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:05.197075 kubelet[2167]: I0212 20:28:05.197010 2167 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d14fa5ec-0e1d-45f2-9380-48577bfb7fac path="/var/lib/kubelet/pods/d14fa5ec-0e1d-45f2-9380-48577bfb7fac/volumes" Feb 12 20:28:05.514149 kubelet[2167]: I0212 20:28:05.514098 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:28:05.514427 kubelet[2167]: E0212 20:28:05.514392 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="mount-cgroup" Feb 12 20:28:05.514548 kubelet[2167]: E0212 20:28:05.514528 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="apply-sysctl-overwrites" Feb 12 20:28:05.514695 kubelet[2167]: E0212 20:28:05.514676 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="cilium-agent" Feb 12 20:28:05.514853 kubelet[2167]: E0212 20:28:05.514833 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="mount-bpf-fs" Feb 12 20:28:05.515028 kubelet[2167]: E0212 20:28:05.514990 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="clean-cilium-state" Feb 12 20:28:05.515188 kubelet[2167]: I0212 20:28:05.515167 2167 memory_manager.go:346] "RemoveStaleState removing state" podUID="d14fa5ec-0e1d-45f2-9380-48577bfb7fac" containerName="cilium-agent" Feb 12 20:28:05.516986 kubelet[2167]: I0212 20:28:05.516950 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:28:05.526893 systemd[1]: Created slice 
kubepods-besteffort-podd106aca1_151e_4bc4_811e_0e98dd31a290.slice. Feb 12 20:28:05.539280 systemd[1]: Created slice kubepods-burstable-podd01556ba_75a0_4847_aa98_e43df474c2f0.slice. Feb 12 20:28:05.540136 kubelet[2167]: I0212 20:28:05.540091 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-hostproc\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.540395 kubelet[2167]: I0212 20:28:05.540370 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cni-path\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.540554 kubelet[2167]: I0212 20:28:05.540531 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d106aca1-151e-4bc4-811e-0e98dd31a290-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-vpxvh\" (UID: \"d106aca1-151e-4bc4-811e-0e98dd31a290\") " pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" Feb 12 20:28:05.540740 kubelet[2167]: I0212 20:28:05.540717 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2db4r\" (UniqueName: \"kubernetes.io/projected/d106aca1-151e-4bc4-811e-0e98dd31a290-kube-api-access-2db4r\") pod \"cilium-operator-f59cbd8c6-vpxvh\" (UID: \"d106aca1-151e-4bc4-811e-0e98dd31a290\") " pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" Feb 12 20:28:05.541058 kubelet[2167]: I0212 20:28:05.541024 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-cgroup\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541148 kubelet[2167]: I0212 20:28:05.541087 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-etc-cni-netd\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541148 kubelet[2167]: I0212 20:28:05.541139 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-clustermesh-secrets\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541284 kubelet[2167]: I0212 20:28:05.541186 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-net\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541284 kubelet[2167]: I0212 20:28:05.541235 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-kernel\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541284 kubelet[2167]: I0212 20:28:05.541279 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-lib-modules\") pod \"cilium-6xj28\" (UID: 
\"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541522 kubelet[2167]: I0212 20:28:05.541326 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-run\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541522 kubelet[2167]: I0212 20:28:05.541367 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-bpf-maps\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541522 kubelet[2167]: I0212 20:28:05.541416 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-xtables-lock\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541522 kubelet[2167]: I0212 20:28:05.541461 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-config-path\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541790 kubelet[2167]: I0212 20:28:05.541530 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-ipsec-secrets\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541790 kubelet[2167]: I0212 
20:28:05.541573 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-hubble-tls\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.541790 kubelet[2167]: I0212 20:28:05.541622 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkmxq\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-kube-api-access-mkmxq\") pod \"cilium-6xj28\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " pod="kube-system/cilium-6xj28" Feb 12 20:28:05.577945 kubelet[2167]: E0212 20:28:05.577899 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:05.838803 env[1736]: time="2024-02-12T20:28:05.838113656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vpxvh,Uid:d106aca1-151e-4bc4-811e-0e98dd31a290,Namespace:kube-system,Attempt:0,}" Feb 12 20:28:05.853437 env[1736]: time="2024-02-12T20:28:05.853361655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xj28,Uid:d01556ba-75a0-4847-aa98-e43df474c2f0,Namespace:kube-system,Attempt:0,}" Feb 12 20:28:05.869184 env[1736]: time="2024-02-12T20:28:05.868717808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:28:05.869184 env[1736]: time="2024-02-12T20:28:05.868790659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:28:05.869184 env[1736]: time="2024-02-12T20:28:05.868817322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:28:05.869551 env[1736]: time="2024-02-12T20:28:05.869157994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e pid=4041 runtime=io.containerd.runc.v2 Feb 12 20:28:05.880069 env[1736]: time="2024-02-12T20:28:05.879931822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:28:05.880277 env[1736]: time="2024-02-12T20:28:05.880017212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:28:05.880277 env[1736]: time="2024-02-12T20:28:05.880044475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:28:05.880508 env[1736]: time="2024-02-12T20:28:05.880274954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2 pid=4062 runtime=io.containerd.runc.v2 Feb 12 20:28:05.896522 systemd[1]: Started cri-containerd-dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e.scope. Feb 12 20:28:05.926538 systemd[1]: Started cri-containerd-037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2.scope. 
Feb 12 20:28:05.995916 env[1736]: time="2024-02-12T20:28:05.995824163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vpxvh,Uid:d106aca1-151e-4bc4-811e-0e98dd31a290,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e\"" Feb 12 20:28:05.999175 env[1736]: time="2024-02-12T20:28:05.998961433Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:28:06.010756 env[1736]: time="2024-02-12T20:28:06.010699356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xj28,Uid:d01556ba-75a0-4847-aa98-e43df474c2f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\"" Feb 12 20:28:06.016666 env[1736]: time="2024-02-12T20:28:06.016594413Z" level=info msg="CreateContainer within sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:28:06.039397 env[1736]: time="2024-02-12T20:28:06.039304491Z" level=info msg="CreateContainer within sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\"" Feb 12 20:28:06.040143 env[1736]: time="2024-02-12T20:28:06.040039835Z" level=info msg="StartContainer for \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\"" Feb 12 20:28:06.067969 systemd[1]: Started cri-containerd-f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9.scope. Feb 12 20:28:06.097502 systemd[1]: cri-containerd-f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9.scope: Deactivated successfully. 
Feb 12 20:28:06.165350 env[1736]: time="2024-02-12T20:28:06.165282828Z" level=info msg="shim disconnected" id=f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9 Feb 12 20:28:06.165806 env[1736]: time="2024-02-12T20:28:06.165772910Z" level=warning msg="cleaning up after shim disconnected" id=f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9 namespace=k8s.io Feb 12 20:28:06.165999 env[1736]: time="2024-02-12T20:28:06.165970102Z" level=info msg="cleaning up dead shim" Feb 12 20:28:06.180614 env[1736]: time="2024-02-12T20:28:06.180543618Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4141 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:28:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:28:06.181437 env[1736]: time="2024-02-12T20:28:06.181290866Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Feb 12 20:28:06.183097 env[1736]: time="2024-02-12T20:28:06.183021182Z" level=error msg="Failed to pipe stderr of container \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\"" error="reading from a closed fifo" Feb 12 20:28:06.183097 env[1736]: time="2024-02-12T20:28:06.183043322Z" level=error msg="Failed to pipe stdout of container \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\"" error="reading from a closed fifo" Feb 12 20:28:06.185619 env[1736]: time="2024-02-12T20:28:06.185522590Z" level=error msg="StartContainer for \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:28:06.186513 kubelet[2167]: E0212 20:28:06.186019 2167 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9" Feb 12 20:28:06.186513 kubelet[2167]: E0212 20:28:06.186184 2167 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:28:06.186513 kubelet[2167]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:28:06.186513 kubelet[2167]: rm /hostbin/cilium-mount Feb 12 20:28:06.187008 kubelet[2167]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mkmxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-6xj28_kube-system(d01556ba-75a0-4847-aa98-e43df474c2f0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:28:06.187174 kubelet[2167]: E0212 20:28:06.186249 2167 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6xj28" podUID=d01556ba-75a0-4847-aa98-e43df474c2f0 Feb 12 20:28:06.475927 env[1736]: time="2024-02-12T20:28:06.474537828Z" level=info msg="StopPodSandbox for \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\"" Feb 12 20:28:06.475927 env[1736]: time="2024-02-12T20:28:06.474678873Z" level=info msg="Container to stop \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:06.486730 systemd[1]: cri-containerd-037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2.scope: Deactivated successfully. 
Feb 12 20:28:06.534034 env[1736]: time="2024-02-12T20:28:06.533936396Z" level=info msg="shim disconnected" id=037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2 Feb 12 20:28:06.534034 env[1736]: time="2024-02-12T20:28:06.534028986Z" level=warning msg="cleaning up after shim disconnected" id=037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2 namespace=k8s.io Feb 12 20:28:06.534375 env[1736]: time="2024-02-12T20:28:06.534053069Z" level=info msg="cleaning up dead shim" Feb 12 20:28:06.549638 env[1736]: time="2024-02-12T20:28:06.549559373Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4172 runtime=io.containerd.runc.v2\n" Feb 12 20:28:06.550239 env[1736]: time="2024-02-12T20:28:06.550172585Z" level=info msg="TearDown network for sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" successfully" Feb 12 20:28:06.550372 env[1736]: time="2024-02-12T20:28:06.550238655Z" level=info msg="StopPodSandbox for \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" returns successfully" Feb 12 20:28:06.579686 kubelet[2167]: E0212 20:28:06.579627 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653043 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cni-path\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653166 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkmxq\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-kube-api-access-mkmxq\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: 
\"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653253 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-cgroup\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653299 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-net\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653365 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-kernel\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.654614 kubelet[2167]: I0212 20:28:06.653429 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-lib-modules\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653470 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-run\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653533 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-xtables-lock\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653606 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-ipsec-secrets\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653655 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-hubble-tls\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653718 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-bpf-maps\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655154 kubelet[2167]: I0212 20:28:06.653793 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-config-path\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655514 kubelet[2167]: I0212 20:28:06.653801 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.655514 kubelet[2167]: I0212 20:28:06.653838 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-hostproc\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655514 kubelet[2167]: I0212 20:28:06.653924 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-etc-cni-netd\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655514 kubelet[2167]: I0212 20:28:06.653995 2167 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-clustermesh-secrets\") pod \"d01556ba-75a0-4847-aa98-e43df474c2f0\" (UID: \"d01556ba-75a0-4847-aa98-e43df474c2f0\") " Feb 12 20:28:06.655514 kubelet[2167]: I0212 20:28:06.654078 2167 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-kernel\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.657838 kubelet[2167]: I0212 20:28:06.657305 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.657838 kubelet[2167]: I0212 20:28:06.657408 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.659096 kubelet[2167]: I0212 20:28:06.659052 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.659700 kubelet[2167]: W0212 20:28:06.659631 2167 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d01556ba-75a0-4847-aa98-e43df474c2f0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:28:06.662506 systemd[1]: var-lib-kubelet-pods-d01556ba\x2d75a0\x2d4847\x2daa98\x2de43df474c2f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmkmxq.mount: Deactivated successfully. Feb 12 20:28:06.666017 kubelet[2167]: I0212 20:28:06.663835 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.666325 kubelet[2167]: I0212 20:28:06.653855 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.666487 kubelet[2167]: I0212 20:28:06.663898 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.667279 kubelet[2167]: I0212 20:28:06.667221 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.667536 kubelet[2167]: I0212 20:28:06.667495 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.667696 kubelet[2167]: I0212 20:28:06.667669 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:06.670050 kubelet[2167]: I0212 20:28:06.669983 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:28:06.670403 kubelet[2167]: I0212 20:28:06.670353 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-kube-api-access-mkmxq" (OuterVolumeSpecName: "kube-api-access-mkmxq") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "kube-api-access-mkmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:06.677286 systemd[1]: var-lib-kubelet-pods-d01556ba\x2d75a0\x2d4847\x2daa98\x2de43df474c2f0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:28:06.683696 kubelet[2167]: I0212 20:28:06.683641 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:06.684034 systemd[1]: var-lib-kubelet-pods-d01556ba\x2d75a0\x2d4847\x2daa98\x2de43df474c2f0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:28:06.690342 kubelet[2167]: I0212 20:28:06.686315 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:06.689216 systemd[1]: var-lib-kubelet-pods-d01556ba\x2d75a0\x2d4847\x2daa98\x2de43df474c2f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:28:06.691703 kubelet[2167]: I0212 20:28:06.691651 2167 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d01556ba-75a0-4847-aa98-e43df474c2f0" (UID: "d01556ba-75a0-4847-aa98-e43df474c2f0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:06.754982 kubelet[2167]: I0212 20:28:06.754916 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-cgroup\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.754982 kubelet[2167]: I0212 20:28:06.754980 2167 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-host-proc-sys-net\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755010 2167 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cni-path\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755040 2167 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-mkmxq\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-kube-api-access-mkmxq\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755065 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-ipsec-secrets\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755088 2167 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d01556ba-75a0-4847-aa98-e43df474c2f0-hubble-tls\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755112 2167 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-lib-modules\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 
kubelet[2167]: I0212 20:28:06.755136 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-run\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755159 2167 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-xtables-lock\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755241 kubelet[2167]: I0212 20:28:06.755181 2167 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-bpf-maps\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755706 kubelet[2167]: I0212 20:28:06.755203 2167 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01556ba-75a0-4847-aa98-e43df474c2f0-cilium-config-path\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755706 kubelet[2167]: I0212 20:28:06.755225 2167 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-hostproc\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755706 kubelet[2167]: I0212 20:28:06.755247 2167 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d01556ba-75a0-4847-aa98-e43df474c2f0-etc-cni-netd\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:06.755706 kubelet[2167]: I0212 20:28:06.755270 2167 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d01556ba-75a0-4847-aa98-e43df474c2f0-clustermesh-secrets\") on node \"172.31.21.6\" DevicePath \"\"" Feb 12 20:28:07.209280 systemd[1]: Removed slice kubepods-burstable-podd01556ba_75a0_4847_aa98_e43df474c2f0.slice. 
Feb 12 20:28:07.264123 kubelet[2167]: I0212 20:28:07.264041 2167 setters.go:548] "Node became not ready" node="172.31.21.6" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:28:07.263921682 +0000 UTC m=+97.052354228 LastTransitionTime:2024-02-12 20:28:07.263921682 +0000 UTC m=+97.052354228 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:28:07.488940 kubelet[2167]: I0212 20:28:07.487823 2167 scope.go:115] "RemoveContainer" containerID="f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9" Feb 12 20:28:07.491183 env[1736]: time="2024-02-12T20:28:07.491120510Z" level=info msg="RemoveContainer for \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\"" Feb 12 20:28:07.524769 kubelet[2167]: I0212 20:28:07.524704 2167 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:28:07.524996 kubelet[2167]: E0212 20:28:07.524812 2167 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d01556ba-75a0-4847-aa98-e43df474c2f0" containerName="mount-cgroup" Feb 12 20:28:07.524996 kubelet[2167]: I0212 20:28:07.524953 2167 memory_manager.go:346] "RemoveStaleState removing state" podUID="d01556ba-75a0-4847-aa98-e43df474c2f0" containerName="mount-cgroup" Feb 12 20:28:07.536254 systemd[1]: Created slice kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice. 
Feb 12 20:28:07.561671 kubelet[2167]: I0212 20:28:07.561589 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-etc-cni-netd\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.561671 kubelet[2167]: I0212 20:28:07.561678 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-xtables-lock\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562014 kubelet[2167]: I0212 20:28:07.561731 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-bpf-maps\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562014 kubelet[2167]: I0212 20:28:07.561775 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-lib-modules\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562014 kubelet[2167]: I0212 20:28:07.561852 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-clustermesh-secrets\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562014 kubelet[2167]: I0212 20:28:07.561961 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-cilium-cgroup\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562049 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-hubble-tls\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562103 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-cilium-ipsec-secrets\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562152 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-host-proc-sys-kernel\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562213 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-cilium-run\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562255 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-hostproc\") 
pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562300 kubelet[2167]: I0212 20:28:07.562299 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-host-proc-sys-net\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562693 kubelet[2167]: I0212 20:28:07.562352 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrmb\" (UniqueName: \"kubernetes.io/projected/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-kube-api-access-zcrmb\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562693 kubelet[2167]: I0212 20:28:07.562400 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-cni-path\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.562693 kubelet[2167]: I0212 20:28:07.562446 2167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae11dd1f-f4ca-4271-8872-2fa0b50de72c-cilium-config-path\") pod \"cilium-m5fjs\" (UID: \"ae11dd1f-f4ca-4271-8872-2fa0b50de72c\") " pod="kube-system/cilium-m5fjs" Feb 12 20:28:07.580808 kubelet[2167]: E0212 20:28:07.580758 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:07.658759 kubelet[2167]: E0212 20:28:07.658720 2167 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Feb 12 20:28:07.762547 env[1736]: time="2024-02-12T20:28:07.762374574Z" level=info msg="RemoveContainer for \"f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9\" returns successfully" Feb 12 20:28:07.846736 env[1736]: time="2024-02-12T20:28:07.846666262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5fjs,Uid:ae11dd1f-f4ca-4271-8872-2fa0b50de72c,Namespace:kube-system,Attempt:0,}" Feb 12 20:28:07.880286 env[1736]: time="2024-02-12T20:28:07.880122715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:28:07.880666 env[1736]: time="2024-02-12T20:28:07.880548227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:28:07.880955 env[1736]: time="2024-02-12T20:28:07.880848437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:28:07.881944 env[1736]: time="2024-02-12T20:28:07.881796731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee pid=4202 runtime=io.containerd.runc.v2 Feb 12 20:28:07.912419 systemd[1]: Started cri-containerd-5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee.scope. 
Feb 12 20:28:07.979387 env[1736]: time="2024-02-12T20:28:07.979301578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m5fjs,Uid:ae11dd1f-f4ca-4271-8872-2fa0b50de72c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\"" Feb 12 20:28:07.984114 env[1736]: time="2024-02-12T20:28:07.984041198Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:28:08.046714 env[1736]: time="2024-02-12T20:28:08.046508914Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d\"" Feb 12 20:28:08.047896 env[1736]: time="2024-02-12T20:28:08.047815642Z" level=info msg="StartContainer for \"76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d\"" Feb 12 20:28:08.094848 systemd[1]: Started cri-containerd-76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d.scope. Feb 12 20:28:08.164122 env[1736]: time="2024-02-12T20:28:08.164035846Z" level=info msg="StartContainer for \"76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d\" returns successfully" Feb 12 20:28:08.181962 systemd[1]: cri-containerd-76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d.scope: Deactivated successfully. 
Feb 12 20:28:08.582251 kubelet[2167]: E0212 20:28:08.582176 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:08.805797 env[1736]: time="2024-02-12T20:28:08.805705219Z" level=info msg="shim disconnected" id=76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d Feb 12 20:28:08.805797 env[1736]: time="2024-02-12T20:28:08.805786901Z" level=warning msg="cleaning up after shim disconnected" id=76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d namespace=k8s.io Feb 12 20:28:08.806502 env[1736]: time="2024-02-12T20:28:08.805813013Z" level=info msg="cleaning up dead shim" Feb 12 20:28:08.829603 env[1736]: time="2024-02-12T20:28:08.829517000Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4286 runtime=io.containerd.runc.v2\n" Feb 12 20:28:08.901977 env[1736]: time="2024-02-12T20:28:08.901757711Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:28:08.905246 env[1736]: time="2024-02-12T20:28:08.905160021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:28:08.909664 env[1736]: time="2024-02-12T20:28:08.909568502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:28:08.910294 env[1736]: time="2024-02-12T20:28:08.910210154Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 20:28:08.914747 env[1736]: time="2024-02-12T20:28:08.914655450Z" level=info msg="CreateContainer within sandbox \"dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:28:08.941522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252101219.mount: Deactivated successfully. Feb 12 20:28:08.951561 env[1736]: time="2024-02-12T20:28:08.951457141Z" level=info msg="CreateContainer within sandbox \"dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098\"" Feb 12 20:28:08.952691 env[1736]: time="2024-02-12T20:28:08.952630912Z" level=info msg="StartContainer for \"6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098\"" Feb 12 20:28:08.991819 systemd[1]: Started cri-containerd-6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098.scope. 
Feb 12 20:28:09.051189 env[1736]: time="2024-02-12T20:28:09.051108351Z" level=info msg="StartContainer for \"6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098\" returns successfully" Feb 12 20:28:09.199035 kubelet[2167]: I0212 20:28:09.198845 2167 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d01556ba-75a0-4847-aa98-e43df474c2f0 path="/var/lib/kubelet/pods/d01556ba-75a0-4847-aa98-e43df474c2f0/volumes" Feb 12 20:28:09.282331 kubelet[2167]: W0212 20:28:09.282261 2167 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01556ba_75a0_4847_aa98_e43df474c2f0.slice/cri-containerd-f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9.scope WatchSource:0}: container "f0693c74afb842c406ea9aa1915d260736f200b5eb8872f2b7472c121f0fe5c9" in namespace "k8s.io": not found Feb 12 20:28:09.520268 env[1736]: time="2024-02-12T20:28:09.520000638Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:28:09.525901 kubelet[2167]: I0212 20:28:09.525807 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" podStartSLOduration=-9.223372032329027e+09 pod.CreationTimestamp="2024-02-12 20:28:05 +0000 UTC" firstStartedPulling="2024-02-12 20:28:05.998444425 +0000 UTC m=+95.786876971" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:28:09.525309637 +0000 UTC m=+99.313742219" watchObservedRunningTime="2024-02-12 20:28:09.525749405 +0000 UTC m=+99.314181987" Feb 12 20:28:09.543417 env[1736]: time="2024-02-12T20:28:09.543352007Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9\"" Feb 12 20:28:09.544651 env[1736]: time="2024-02-12T20:28:09.544603814Z" level=info msg="StartContainer for \"1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9\"" Feb 12 20:28:09.573620 systemd[1]: Started cri-containerd-1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9.scope. Feb 12 20:28:09.588497 kubelet[2167]: E0212 20:28:09.588459 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:09.637200 env[1736]: time="2024-02-12T20:28:09.637135190Z" level=info msg="StartContainer for \"1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9\" returns successfully" Feb 12 20:28:09.660412 systemd[1]: cri-containerd-1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9.scope: Deactivated successfully. Feb 12 20:28:09.692911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9-rootfs.mount: Deactivated successfully. 
Feb 12 20:28:10.038028 env[1736]: time="2024-02-12T20:28:10.037960391Z" level=info msg="shim disconnected" id=1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9 Feb 12 20:28:10.039046 env[1736]: time="2024-02-12T20:28:10.038986219Z" level=warning msg="cleaning up after shim disconnected" id=1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9 namespace=k8s.io Feb 12 20:28:10.039258 env[1736]: time="2024-02-12T20:28:10.039222988Z" level=info msg="cleaning up dead shim" Feb 12 20:28:10.053490 env[1736]: time="2024-02-12T20:28:10.053429268Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4387 runtime=io.containerd.runc.v2\n" Feb 12 20:28:10.516677 env[1736]: time="2024-02-12T20:28:10.516603867Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:28:10.544137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27519507.mount: Deactivated successfully. Feb 12 20:28:10.557801 env[1736]: time="2024-02-12T20:28:10.557711644Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f\"" Feb 12 20:28:10.558962 env[1736]: time="2024-02-12T20:28:10.558828827Z" level=info msg="StartContainer for \"8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f\"" Feb 12 20:28:10.590920 kubelet[2167]: E0212 20:28:10.589216 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:10.595956 systemd[1]: Started cri-containerd-8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f.scope. 
Feb 12 20:28:10.670693 env[1736]: time="2024-02-12T20:28:10.670627346Z" level=info msg="StartContainer for \"8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f\" returns successfully" Feb 12 20:28:10.672173 systemd[1]: cri-containerd-8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f.scope: Deactivated successfully. Feb 12 20:28:10.709708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f-rootfs.mount: Deactivated successfully. Feb 12 20:28:10.724715 env[1736]: time="2024-02-12T20:28:10.724643660Z" level=info msg="shim disconnected" id=8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f Feb 12 20:28:10.725040 env[1736]: time="2024-02-12T20:28:10.724715971Z" level=warning msg="cleaning up after shim disconnected" id=8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f namespace=k8s.io Feb 12 20:28:10.725040 env[1736]: time="2024-02-12T20:28:10.724739178Z" level=info msg="cleaning up dead shim" Feb 12 20:28:10.738044 env[1736]: time="2024-02-12T20:28:10.737965662Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4444 runtime=io.containerd.runc.v2\n" Feb 12 20:28:11.524614 env[1736]: time="2024-02-12T20:28:11.524519436Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:28:11.552279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341172494.mount: Deactivated successfully. 
Feb 12 20:28:11.565903 env[1736]: time="2024-02-12T20:28:11.565822094Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d\"" Feb 12 20:28:11.567259 env[1736]: time="2024-02-12T20:28:11.567203515Z" level=info msg="StartContainer for \"e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d\"" Feb 12 20:28:11.589807 kubelet[2167]: E0212 20:28:11.589739 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:11.599994 systemd[1]: Started cri-containerd-e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d.scope. Feb 12 20:28:11.651945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776878347.mount: Deactivated successfully. Feb 12 20:28:11.663702 systemd[1]: cri-containerd-e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d.scope: Deactivated successfully. Feb 12 20:28:11.668474 env[1736]: time="2024-02-12T20:28:11.667609619Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice/cri-containerd-e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d.scope/memory.events\": no such file or directory" Feb 12 20:28:11.671182 env[1736]: time="2024-02-12T20:28:11.671093170Z" level=info msg="StartContainer for \"e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d\" returns successfully" Feb 12 20:28:11.705792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d-rootfs.mount: Deactivated successfully. 
Feb 12 20:28:11.719182 env[1736]: time="2024-02-12T20:28:11.719120677Z" level=info msg="shim disconnected" id=e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d Feb 12 20:28:11.719605 env[1736]: time="2024-02-12T20:28:11.719571558Z" level=warning msg="cleaning up after shim disconnected" id=e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d namespace=k8s.io Feb 12 20:28:11.719729 env[1736]: time="2024-02-12T20:28:11.719701108Z" level=info msg="cleaning up dead shim" Feb 12 20:28:11.733849 env[1736]: time="2024-02-12T20:28:11.733793444Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4502 runtime=io.containerd.runc.v2\n" Feb 12 20:28:12.399811 kubelet[2167]: W0212 20:28:12.399742 2167 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice/cri-containerd-76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d.scope WatchSource:0}: task 76445fea436d01bb44a3911abea106eb3f0f0c8dc994139724ca96fc0080594d not found: not found Feb 12 20:28:12.496904 kubelet[2167]: E0212 20:28:12.496833 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:12.529687 env[1736]: time="2024-02-12T20:28:12.529632064Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:28:12.590464 kubelet[2167]: E0212 20:28:12.590410 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:12.625086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546994142.mount: Deactivated successfully. 
Feb 12 20:28:12.635931 env[1736]: time="2024-02-12T20:28:12.635825312Z" level=info msg="CreateContainer within sandbox \"5c35fb4f8e2666c2d9fa69ee550180577841e8091a35ed87870f7b290c32d1ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9\"" Feb 12 20:28:12.637597 env[1736]: time="2024-02-12T20:28:12.637546630Z" level=info msg="StartContainer for \"c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9\"" Feb 12 20:28:12.660955 kubelet[2167]: E0212 20:28:12.660720 2167 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:28:12.683020 systemd[1]: Started cri-containerd-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9.scope. Feb 12 20:28:12.757193 env[1736]: time="2024-02-12T20:28:12.757125787Z" level=info msg="StartContainer for \"c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9\" returns successfully" Feb 12 20:28:12.790957 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.mwkS6y.mount: Deactivated successfully. Feb 12 20:28:13.519916 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 20:28:13.590683 kubelet[2167]: E0212 20:28:13.590586 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:14.591888 kubelet[2167]: E0212 20:28:14.591790 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:14.996623 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.T22MCn.mount: Deactivated successfully. 
Feb 12 20:28:15.516920 kubelet[2167]: W0212 20:28:15.514149 2167 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice/cri-containerd-1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9.scope WatchSource:0}: task 1b91743ca1d3fff9a21127c3ff6ce7690d42882e1cc7bccd49bc64ed5cdf48b9 not found: not found Feb 12 20:28:15.592688 kubelet[2167]: E0212 20:28:15.592611 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:16.593936 kubelet[2167]: E0212 20:28:16.593821 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:17.337636 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.YQWKFm.mount: Deactivated successfully. Feb 12 20:28:17.400089 systemd-networkd[1539]: lxc_health: Link UP Feb 12 20:28:17.410913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:28:17.412140 systemd-networkd[1539]: lxc_health: Gained carrier Feb 12 20:28:17.413848 (udev-worker)[5060]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:28:17.594160 kubelet[2167]: E0212 20:28:17.593988 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:17.883316 kubelet[2167]: I0212 20:28:17.883092 2167 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m5fjs" podStartSLOduration=10.883009208 pod.CreationTimestamp="2024-02-12 20:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:28:13.560370739 +0000 UTC m=+103.348803309" watchObservedRunningTime="2024-02-12 20:28:17.883009208 +0000 UTC m=+107.671441766" Feb 12 20:28:18.594430 kubelet[2167]: E0212 20:28:18.594359 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:18.632437 kubelet[2167]: W0212 20:28:18.632373 2167 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice/cri-containerd-8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f.scope WatchSource:0}: task 8629768e3ecc1f2f9bcbe78ef00825dc57fe88bb67b8fb72d9908fd20025f59f not found: not found Feb 12 20:28:18.983539 systemd-networkd[1539]: lxc_health: Gained IPv6LL Feb 12 20:28:19.594967 kubelet[2167]: E0212 20:28:19.594903 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:19.737702 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.gTsFnD.mount: Deactivated successfully. 
Feb 12 20:28:20.595378 kubelet[2167]: E0212 20:28:20.595311 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:21.595686 kubelet[2167]: E0212 20:28:21.595615 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:21.742822 kubelet[2167]: W0212 20:28:21.742740 2167 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae11dd1f_f4ca_4271_8872_2fa0b50de72c.slice/cri-containerd-e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d.scope WatchSource:0}: task e9884e5285d4cc4cecce82f445eb5826aaada0d5cf923a73461c38931d4e091d not found: not found Feb 12 20:28:22.141768 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.DyZIGE.mount: Deactivated successfully. Feb 12 20:28:22.596623 kubelet[2167]: E0212 20:28:22.596538 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:23.597069 kubelet[2167]: E0212 20:28:23.596994 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:24.437684 systemd[1]: run-containerd-runc-k8s.io-c240415c6ee4aaebacc5dda388acf17bb3de8e0bceb2d0300efe81c489ae4fe9-runc.ODBNV7.mount: Deactivated successfully. 
Feb 12 20:28:24.598068 kubelet[2167]: E0212 20:28:24.597959 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:25.598248 kubelet[2167]: E0212 20:28:25.598175 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:26.599293 kubelet[2167]: E0212 20:28:26.599224 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:27.599410 kubelet[2167]: E0212 20:28:27.599334 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:28.600598 kubelet[2167]: E0212 20:28:28.600528 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:29.601388 kubelet[2167]: E0212 20:28:29.601340 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:30.602937 kubelet[2167]: E0212 20:28:30.602838 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:31.603482 kubelet[2167]: E0212 20:28:31.603401 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:32.497184 kubelet[2167]: E0212 20:28:32.497143 2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:32.517997 env[1736]: time="2024-02-12T20:28:32.517935692Z" level=info msg="StopPodSandbox for \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\"" Feb 12 20:28:32.518536 env[1736]: time="2024-02-12T20:28:32.518086833Z" level=info msg="TearDown network for sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" 
successfully" Feb 12 20:28:32.518536 env[1736]: time="2024-02-12T20:28:32.518176462Z" level=info msg="StopPodSandbox for \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" returns successfully" Feb 12 20:28:32.519645 env[1736]: time="2024-02-12T20:28:32.519546700Z" level=info msg="RemovePodSandbox for \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\"" Feb 12 20:28:32.519808 env[1736]: time="2024-02-12T20:28:32.519664493Z" level=info msg="Forcibly stopping sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\"" Feb 12 20:28:32.519918 env[1736]: time="2024-02-12T20:28:32.519846786Z" level=info msg="TearDown network for sandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" successfully" Feb 12 20:28:32.527906 env[1736]: time="2024-02-12T20:28:32.527790547Z" level=info msg="RemovePodSandbox \"6a202ac6ea10094d6f2cc7c2660d70d8739f8424cc51dac14a7285aab4efc7c9\" returns successfully" Feb 12 20:28:32.528595 env[1736]: time="2024-02-12T20:28:32.528534707Z" level=info msg="StopPodSandbox for \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\"" Feb 12 20:28:32.528784 env[1736]: time="2024-02-12T20:28:32.528710748Z" level=info msg="TearDown network for sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" successfully" Feb 12 20:28:32.528908 env[1736]: time="2024-02-12T20:28:32.528774240Z" level=info msg="StopPodSandbox for \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" returns successfully" Feb 12 20:28:32.529394 env[1736]: time="2024-02-12T20:28:32.529355727Z" level=info msg="RemovePodSandbox for \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\"" Feb 12 20:28:32.529577 env[1736]: time="2024-02-12T20:28:32.529518676Z" level=info msg="Forcibly stopping sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\"" Feb 12 20:28:32.529782 env[1736]: time="2024-02-12T20:28:32.529748369Z" level=info 
msg="TearDown network for sandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" successfully" Feb 12 20:28:32.534655 env[1736]: time="2024-02-12T20:28:32.534590740Z" level=info msg="RemovePodSandbox \"037176dad5d0f95f5da837f4b11663264dd2f8be33dde27d8f8561d0e19b90e2\" returns successfully" Feb 12 20:28:32.603787 kubelet[2167]: E0212 20:28:32.603750 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:33.605173 kubelet[2167]: E0212 20:28:33.605112 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:34.606270 kubelet[2167]: E0212 20:28:34.606213 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:35.606600 kubelet[2167]: E0212 20:28:35.606558 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:36.607834 kubelet[2167]: E0212 20:28:36.607770 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:37.608929 kubelet[2167]: E0212 20:28:37.608858 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:38.610722 kubelet[2167]: E0212 20:28:38.610647 2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:39.325588 systemd[1]: cri-containerd-6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098.scope: Deactivated successfully. Feb 12 20:28:39.360338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098-rootfs.mount: Deactivated successfully. 
Feb 12 20:28:39.423496 env[1736]: time="2024-02-12T20:28:39.423433878Z" level=info msg="shim disconnected" id=6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098
Feb 12 20:28:39.424226 env[1736]: time="2024-02-12T20:28:39.424186416Z" level=warning msg="cleaning up after shim disconnected" id=6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098 namespace=k8s.io
Feb 12 20:28:39.424348 env[1736]: time="2024-02-12T20:28:39.424320554Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:39.438219 env[1736]: time="2024-02-12T20:28:39.438161415Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5190 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:39.611148 kubelet[2167]: E0212 20:28:39.610974    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:39.615125 kubelet[2167]: I0212 20:28:39.615063    2167 scope.go:115] "RemoveContainer" containerID="6c03703c29d10edcb96e693dd444429af5fea97ebb9f34d930dda8d12b32c098"
Feb 12 20:28:39.618909 env[1736]: time="2024-02-12T20:28:39.618819864Z" level=info msg="CreateContainer within sandbox \"dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Feb 12 20:28:39.639978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231751842.mount: Deactivated successfully.
Feb 12 20:28:39.650567 env[1736]: time="2024-02-12T20:28:39.650485933Z" level=info msg="CreateContainer within sandbox \"dfbe6b2616348ee9acecc3b4c51d79eebc91497f89578cce95ad2593dd0cda5e\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"152d7ceeec02a83962e8a75b0c4fd7905c1708a18295f437e21862ad6d2a67de\""
Feb 12 20:28:39.651497 env[1736]: time="2024-02-12T20:28:39.651449433Z" level=info msg="StartContainer for \"152d7ceeec02a83962e8a75b0c4fd7905c1708a18295f437e21862ad6d2a67de\""
Feb 12 20:28:39.683141 systemd[1]: Started cri-containerd-152d7ceeec02a83962e8a75b0c4fd7905c1708a18295f437e21862ad6d2a67de.scope.
Feb 12 20:28:39.749093 env[1736]: time="2024-02-12T20:28:39.749022323Z" level=info msg="StartContainer for \"152d7ceeec02a83962e8a75b0c4fd7905c1708a18295f437e21862ad6d2a67de\" returns successfully"
Feb 12 20:28:40.611739 kubelet[2167]: E0212 20:28:40.611663    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:41.612694 kubelet[2167]: E0212 20:28:41.612613    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:42.613761 kubelet[2167]: E0212 20:28:42.613715    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:43.615423 kubelet[2167]: E0212 20:28:43.615349    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:44.616223 kubelet[2167]: E0212 20:28:44.616163    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:45.616624 kubelet[2167]: E0212 20:28:45.616518    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:46.617316 kubelet[2167]: E0212 20:28:46.617266    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:47.579574 kubelet[2167]: E0212 20:28:47.579510    2167 controller.go:189] failed to update lease, error: Put "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 12 20:28:47.618379 kubelet[2167]: E0212 20:28:47.618308    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:47.775895 kubelet[2167]: E0212 20:28:47.775805    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:28:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:28:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:28:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:28:37Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22\\\",\\\"registry.k8s.io/kube-proxy:v1.26.13\\\"],\\\"sizeBytes\\\":21139040},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"172.31.21.6\": Patch \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 20:28:48.619276 kubelet[2167]: E0212 20:28:48.619184    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:49.619854 kubelet[2167]: E0212 20:28:49.619807    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:50.620900 kubelet[2167]: E0212 20:28:50.620830    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:51.621940 kubelet[2167]: E0212 20:28:51.621896    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:52.497334 kubelet[2167]: E0212 20:28:52.497256    2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:52.623105 kubelet[2167]: E0212 20:28:52.623031    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:53.623457 kubelet[2167]: E0212 20:28:53.623409    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:54.625118 kubelet[2167]: E0212 20:28:54.625016    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:55.625274 kubelet[2167]: E0212 20:28:55.625225    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:56.627060 kubelet[2167]: E0212 20:28:56.626980    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:57.580271 kubelet[2167]: E0212 20:28:57.580211    2167 controller.go:189] failed to update lease, error: Put "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 12 20:28:57.628139 kubelet[2167]: E0212 20:28:57.628070    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:57.776257 kubelet[2167]: E0212 20:28:57.776179    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.6\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 12 20:28:58.628627 kubelet[2167]: E0212 20:28:58.628559    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:59.629402 kubelet[2167]: E0212 20:28:59.629354    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:00.631117 kubelet[2167]: E0212 20:29:00.631067    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:01.632250 kubelet[2167]: E0212 20:29:01.632200    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:02.633759 kubelet[2167]: E0212 20:29:02.633688    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:03.633922 kubelet[2167]: E0212 20:29:03.633838    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:04.634892 kubelet[2167]: E0212 20:29:04.634821    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:05.629928 kubelet[2167]: E0212 20:29:05.627559    2167 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-f59cbd8c6-vpxvh.17b3378307e56775", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-f59cbd8c6-vpxvh", UID:"d106aca1-151e-4bc4-811e-0e98dd31a290", APIVersion:"v1", ResourceVersion:"928", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.6"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 28, 39, 616268149, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 28, 39, 616268149, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.18.59:6443/api/v1/namespaces/kube-system/events": unexpected EOF'(may retry after sleeping)
Feb 12 20:29:05.629928 kubelet[2167]: E0212 20:29:05.627757    2167 controller.go:189] failed to update lease, error: Put "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": unexpected EOF
Feb 12 20:29:05.630769 kubelet[2167]: E0212 20:29:05.630704    2167 controller.go:189] failed to update lease, error: Put "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": dial tcp 172.31.18.59:6443: connect: connection refused
Feb 12 20:29:05.631425 kubelet[2167]: E0212 20:29:05.631364    2167 controller.go:189] failed to update lease, error: Put "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": dial tcp 172.31.18.59:6443: connect: connection refused
Feb 12 20:29:05.631425 kubelet[2167]: I0212 20:29:05.631415    2167 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 12 20:29:05.632085 kubelet[2167]: E0212 20:29:05.632036    2167 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": dial tcp 172.31.18.59:6443: connect: connection refused
Feb 12 20:29:05.635273 kubelet[2167]: E0212 20:29:05.635210    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:05.833570 kubelet[2167]: E0212 20:29:05.833501    2167 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": dial tcp 172.31.18.59:6443: connect: connection refused
Feb 12 20:29:06.234781 kubelet[2167]: E0212 20:29:06.234727    2167 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": dial tcp 172.31.18.59:6443: connect: connection refused
Feb 12 20:29:06.628364 kubelet[2167]: E0212 20:29:06.628240    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.6\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 20:29:06.629138 kubelet[2167]: E0212 20:29:06.629087    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.6\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused"
Feb 12 20:29:06.631840 kubelet[2167]: E0212 20:29:06.631790    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.6\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused"
Feb 12 20:29:06.631840 kubelet[2167]: E0212 20:29:06.631834    2167 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 12 20:29:06.631840 kubelet[2167]: I0212 20:29:06.631805    2167 status_manager.go:698] "Failed to get status for pod" podUID=d106aca1-151e-4bc4-811e-0e98dd31a290 pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" err="Get \"https://172.31.18.59:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-vpxvh\": dial tcp 172.31.18.59:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 20:29:06.634325 kubelet[2167]: I0212 20:29:06.632375    2167 status_manager.go:698] "Failed to get status for pod" podUID=d106aca1-151e-4bc4-811e-0e98dd31a290 pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" err="Get \"https://172.31.18.59:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-vpxvh\": dial tcp 172.31.18.59:6443: connect: connection refused"
Feb 12 20:29:06.634325 kubelet[2167]: I0212 20:29:06.633148    2167 status_manager.go:698] "Failed to get status for pod" podUID=d106aca1-151e-4bc4-811e-0e98dd31a290 pod="kube-system/cilium-operator-f59cbd8c6-vpxvh" err="Get \"https://172.31.18.59:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-vpxvh\": dial tcp 172.31.18.59:6443: connect: connection refused"
Feb 12 20:29:06.635530 kubelet[2167]: E0212 20:29:06.635491    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:07.637838 kubelet[2167]: E0212 20:29:07.637782    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:08.638154 kubelet[2167]: E0212 20:29:08.638103    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:09.639064 kubelet[2167]: E0212 20:29:09.638974    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:10.639401 kubelet[2167]: E0212 20:29:10.639348    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:11.640772 kubelet[2167]: E0212 20:29:11.640697    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:12.497218 kubelet[2167]: E0212 20:29:12.496974    2167 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:12.641424 kubelet[2167]: E0212 20:29:12.641377    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:13.643104 kubelet[2167]: E0212 20:29:13.643044    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:14.644003 kubelet[2167]: E0212 20:29:14.643951    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:15.645936 kubelet[2167]: E0212 20:29:15.645839    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:16.646349 kubelet[2167]: E0212 20:29:16.646301    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:17.036305 kubelet[2167]: E0212 20:29:17.036230    2167 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 12 20:29:17.647522 kubelet[2167]: E0212 20:29:17.647452    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:18.647674 kubelet[2167]: E0212 20:29:18.647629    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:19.648857 kubelet[2167]: E0212 20:29:19.648810    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:20.650201 kubelet[2167]: E0212 20:29:20.650099    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:21.650759 kubelet[2167]: E0212 20:29:21.650656    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:22.651886 kubelet[2167]: E0212 20:29:22.651791    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:23.652733 kubelet[2167]: E0212 20:29:23.652685    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:24.654024 kubelet[2167]: E0212 20:29:24.653976    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:25.655504 kubelet[2167]: E0212 20:29:25.655424    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:26.656314 kubelet[2167]: E0212 20:29:26.656262    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:26.983066 kubelet[2167]: E0212 20:29:26.983030    2167 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.6\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.21.6?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 12 20:29:27.658109 kubelet[2167]: E0212 20:29:27.658036    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:28.637725 kubelet[2167]: E0212 20:29:28.637660    2167 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.6?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 12 20:29:28.658961 kubelet[2167]: E0212 20:29:28.658902    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:29.659143 kubelet[2167]: E0212 20:29:29.659045    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:30.659841 kubelet[2167]: E0212 20:29:30.659771    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:29:31.660721 kubelet[2167]: E0212 20:29:31.660669    2167 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"