Feb 12 20:24:58.956367 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 12 20:24:58.956417 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 20:24:58.956443 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:24:58.956459 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 12 20:24:58.956473 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:24:58.956486 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 12 20:24:58.956502 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 12 20:24:58.956516 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 20:24:58.956530 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 12 20:24:58.956544 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 20:24:58.956562 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 12 20:24:58.956576 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 12 20:24:58.956590 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 12 20:24:58.956604 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 20:24:58.956620 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 12 20:24:58.956640 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 12 20:24:58.956654 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 12 20:24:58.956669 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 12 20:24:58.956683 kernel: printk: bootconsole [uart0] enabled
Feb 12 20:24:58.956697 kernel: NUMA: Failed to initialise from firmware
Feb 12 20:24:58.956712 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:24:58.956726 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 12 20:24:58.956741 kernel: Zone ranges:
Feb 12 20:24:58.956755 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 12 20:24:58.956770 kernel: DMA32 empty
Feb 12 20:24:58.956784 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 12 20:24:58.956802 kernel: Movable zone start for each node
Feb 12 20:24:58.956817 kernel: Early memory node ranges
Feb 12 20:24:58.956831 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 12 20:24:58.956846 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 12 20:24:58.956860 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 12 20:24:58.956875 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 12 20:24:58.956889 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 12 20:24:58.956904 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 12 20:24:58.956918 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:24:58.956932 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 12 20:24:58.956947 kernel: psci: probing for conduit method from ACPI.
Feb 12 20:24:58.956961 kernel: psci: PSCIv1.0 detected in firmware.
Feb 12 20:24:58.956979 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 20:24:58.956994 kernel: psci: Trusted OS migration not required
Feb 12 20:24:58.957015 kernel: psci: SMC Calling Convention v1.1
Feb 12 20:24:58.957031 kernel: ACPI: SRAT not present
Feb 12 20:24:58.957046 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 20:24:58.957066 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 20:24:58.957081 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 20:24:58.957097 kernel: Detected PIPT I-cache on CPU0
Feb 12 20:24:58.957112 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 20:24:58.957127 kernel: CPU features: detected: Spectre-v2
Feb 12 20:24:58.957142 kernel: CPU features: detected: Spectre-v3a
Feb 12 20:24:58.957157 kernel: CPU features: detected: Spectre-BHB
Feb 12 20:24:58.957172 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 20:24:58.957188 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 20:24:58.957203 kernel: CPU features: detected: ARM erratum 1742098
Feb 12 20:24:58.957218 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 12 20:24:58.957237 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 12 20:24:58.957253 kernel: Policy zone: Normal
Feb 12 20:24:58.957271 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:24:58.957287 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:24:58.957322 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:24:58.957339 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:24:58.957355 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:24:58.957371 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 12 20:24:58.957387 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 12 20:24:58.957403 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:24:58.957423 kernel: trace event string verifier disabled
Feb 12 20:24:58.957438 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 20:24:58.957454 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:24:58.957470 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:24:58.957486 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 20:24:58.957501 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:24:58.957517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:24:58.957532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:24:58.957547 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 20:24:58.957563 kernel: GICv3: 96 SPIs implemented
Feb 12 20:24:58.957577 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 20:24:58.957593 kernel: GICv3: Distributor has no Range Selector support
Feb 12 20:24:58.957612 kernel: Root IRQ handler: gic_handle_irq
Feb 12 20:24:58.957627 kernel: GICv3: 16 PPIs implemented
Feb 12 20:24:58.957642 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 12 20:24:58.957657 kernel: ACPI: SRAT not present
Feb 12 20:24:58.957672 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 12 20:24:58.957687 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 20:24:58.957702 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 20:24:58.957718 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 12 20:24:58.957733 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 12 20:24:58.957748 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 12 20:24:58.957764 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 12 20:24:58.957783 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 12 20:24:58.957799 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 12 20:24:58.957815 kernel: Console: colour dummy device 80x25
Feb 12 20:24:58.957849 kernel: printk: console [tty1] enabled
Feb 12 20:24:58.957868 kernel: ACPI: Core revision 20210730
Feb 12 20:24:58.957884 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 12 20:24:58.957900 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:24:58.957916 kernel: LSM: Security Framework initializing
Feb 12 20:24:58.957932 kernel: SELinux: Initializing.
Feb 12 20:24:58.957948 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:24:58.957969 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:24:58.957984 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:24:58.958000 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 12 20:24:58.958015 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 12 20:24:58.958031 kernel: Remapping and enabling EFI services.
Feb 12 20:24:58.958046 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:24:58.958062 kernel: Detected PIPT I-cache on CPU1
Feb 12 20:24:58.958077 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 12 20:24:58.958093 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 12 20:24:58.958113 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 12 20:24:58.958129 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:24:58.958144 kernel: SMP: Total of 2 processors activated.
Feb 12 20:24:58.958159 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 20:24:58.958175 kernel: CPU features: detected: 32-bit EL1 Support
Feb 12 20:24:58.958190 kernel: CPU features: detected: CRC32 instructions
Feb 12 20:24:58.958206 kernel: CPU: All CPU(s) started at EL1
Feb 12 20:24:58.958221 kernel: alternatives: patching kernel code
Feb 12 20:24:58.958237 kernel: devtmpfs: initialized
Feb 12 20:24:58.958256 kernel: KASLR disabled due to lack of seed
Feb 12 20:24:58.958272 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:24:58.958288 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:24:58.958333 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:24:58.958354 kernel: SMBIOS 3.0.0 present.
Feb 12 20:24:58.958370 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 12 20:24:58.958386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:24:58.958402 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 20:24:58.958419 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 20:24:58.958435 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 20:24:58.958451 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:24:58.958468 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Feb 12 20:24:58.958488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:24:58.958504 kernel: cpuidle: using governor menu
Feb 12 20:24:58.958521 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 20:24:58.958537 kernel: ASID allocator initialised with 32768 entries
Feb 12 20:24:58.958553 kernel: ACPI: bus type PCI registered
Feb 12 20:24:58.958573 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:24:58.958589 kernel: Serial: AMBA PL011 UART driver
Feb 12 20:24:58.958606 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:24:58.958622 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 20:24:58.958638 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:24:58.958655 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 20:24:58.958671 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:24:58.958687 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 20:24:58.958703 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:24:58.958724 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:24:58.958740 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:24:58.958756 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:24:58.958772 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:24:58.958789 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:24:58.958805 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:24:58.958821 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:24:58.958838 kernel: ACPI: Interpreter enabled
Feb 12 20:24:58.958854 kernel: ACPI: Using GIC for interrupt routing
Feb 12 20:24:58.958874 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 20:24:58.958890 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 12 20:24:58.959188 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:24:58.959430 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 20:24:58.959631 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 20:24:58.959836 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 12 20:24:58.960033 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 12 20:24:58.960063 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 12 20:24:58.960081 kernel: acpiphp: Slot [1] registered
Feb 12 20:24:58.960097 kernel: acpiphp: Slot [2] registered
Feb 12 20:24:58.960113 kernel: acpiphp: Slot [3] registered
Feb 12 20:24:58.960130 kernel: acpiphp: Slot [4] registered
Feb 12 20:24:58.960146 kernel: acpiphp: Slot [5] registered
Feb 12 20:24:58.960162 kernel: acpiphp: Slot [6] registered
Feb 12 20:24:58.960178 kernel: acpiphp: Slot [7] registered
Feb 12 20:24:58.960194 kernel: acpiphp: Slot [8] registered
Feb 12 20:24:58.960214 kernel: acpiphp: Slot [9] registered
Feb 12 20:24:58.960231 kernel: acpiphp: Slot [10] registered
Feb 12 20:24:58.960247 kernel: acpiphp: Slot [11] registered
Feb 12 20:24:58.960263 kernel: acpiphp: Slot [12] registered
Feb 12 20:24:58.960279 kernel: acpiphp: Slot [13] registered
Feb 12 20:24:58.960318 kernel: acpiphp: Slot [14] registered
Feb 12 20:24:58.960338 kernel: acpiphp: Slot [15] registered
Feb 12 20:24:58.960354 kernel: acpiphp: Slot [16] registered
Feb 12 20:24:58.960370 kernel: acpiphp: Slot [17] registered
Feb 12 20:24:58.960386 kernel: acpiphp: Slot [18] registered
Feb 12 20:24:58.960407 kernel: acpiphp: Slot [19] registered
Feb 12 20:24:58.960423 kernel: acpiphp: Slot [20] registered
Feb 12 20:24:58.960439 kernel: acpiphp: Slot [21] registered
Feb 12 20:24:58.960456 kernel: acpiphp: Slot [22] registered
Feb 12 20:24:58.960472 kernel: acpiphp: Slot [23] registered
Feb 12 20:24:58.960489 kernel: acpiphp: Slot [24] registered
Feb 12 20:24:58.960505 kernel: acpiphp: Slot [25] registered
Feb 12 20:24:58.960521 kernel: acpiphp: Slot [26] registered
Feb 12 20:24:58.960537 kernel: acpiphp: Slot [27] registered
Feb 12 20:24:58.960557 kernel: acpiphp: Slot [28] registered
Feb 12 20:24:58.960573 kernel: acpiphp: Slot [29] registered
Feb 12 20:24:58.960589 kernel: acpiphp: Slot [30] registered
Feb 12 20:24:58.960605 kernel: acpiphp: Slot [31] registered
Feb 12 20:24:58.960621 kernel: PCI host bridge to bus 0000:00
Feb 12 20:24:58.960825 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 12 20:24:58.961010 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 20:24:58.961191 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:24:58.966550 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 12 20:24:58.966808 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 12 20:24:58.967030 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 12 20:24:58.967237 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 12 20:24:58.967486 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 20:24:58.967691 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 12 20:24:58.967902 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:24:58.968124 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 20:24:58.970407 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 12 20:24:58.970646 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 12 20:24:58.970853 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 12 20:24:58.971057 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:24:58.971261 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 12 20:24:58.974540 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 12 20:24:58.974757 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 12 20:24:58.974956 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 12 20:24:58.975159 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 12 20:24:58.975389 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 12 20:24:58.975575 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 20:24:58.975753 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:24:58.975785 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 20:24:58.975803 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 20:24:58.975821 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 20:24:58.975837 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 20:24:58.975854 kernel: iommu: Default domain type: Translated
Feb 12 20:24:58.975870 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 20:24:58.975886 kernel: vgaarb: loaded
Feb 12 20:24:58.975903 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:24:58.975919 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:24:58.975940 kernel: PTP clock support registered
Feb 12 20:24:58.975957 kernel: Registered efivars operations
Feb 12 20:24:58.975973 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 20:24:58.975989 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:24:58.976006 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:24:58.976022 kernel: pnp: PnP ACPI init
Feb 12 20:24:58.976235 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 12 20:24:58.976260 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 20:24:58.976276 kernel: NET: Registered PF_INET protocol family
Feb 12 20:24:58.976317 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:24:58.976336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:24:58.976353 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:24:58.976370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:24:58.976387 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:24:58.976403 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:24:58.976420 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:24:58.976436 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:24:58.976453 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:24:58.976474 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:24:58.976491 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 12 20:24:58.976507 kernel: kvm [1]: HYP mode not available
Feb 12 20:24:58.976523 kernel: Initialise system trusted keyrings
Feb 12 20:24:58.976540 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:24:58.976556 kernel: Key type asymmetric registered
Feb 12 20:24:58.976573 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:24:58.976589 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:24:58.976605 kernel: io scheduler mq-deadline registered
Feb 12 20:24:58.976626 kernel: io scheduler kyber registered
Feb 12 20:24:58.976642 kernel: io scheduler bfq registered
Feb 12 20:24:58.976860 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 12 20:24:58.976885 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 20:24:58.976902 kernel: ACPI: button: Power Button [PWRB]
Feb 12 20:24:58.976919 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:24:58.976936 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 12 20:24:58.977136 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 12 20:24:58.977163 kernel: printk: console [ttyS0] disabled
Feb 12 20:24:58.977181 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 12 20:24:58.977198 kernel: printk: console [ttyS0] enabled
Feb 12 20:24:58.977214 kernel: printk: bootconsole [uart0] disabled
Feb 12 20:24:58.977230 kernel: thunder_xcv, ver 1.0
Feb 12 20:24:58.977246 kernel: thunder_bgx, ver 1.0
Feb 12 20:24:58.977263 kernel: nicpf, ver 1.0
Feb 12 20:24:58.977279 kernel: nicvf, ver 1.0
Feb 12 20:24:58.977507 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 20:24:58.977699 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T20:24:58 UTC (1707769498)
Feb 12 20:24:58.977723 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 20:24:58.977739 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:24:58.977755 kernel: Segment Routing with IPv6
Feb 12 20:24:58.977772 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:24:58.977788 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:24:58.977804 kernel: Key type dns_resolver registered
Feb 12 20:24:58.977820 kernel: registered taskstats version 1
Feb 12 20:24:58.977861 kernel: Loading compiled-in X.509 certificates
Feb 12 20:24:58.977879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 20:24:58.977896 kernel: Key type .fscrypt registered
Feb 12 20:24:58.977912 kernel: Key type fscrypt-provisioning registered
Feb 12 20:24:58.977928 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:24:58.977944 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:24:58.977960 kernel: ima: No architecture policies found
Feb 12 20:24:58.977976 kernel: Freeing unused kernel memory: 34688K
Feb 12 20:24:58.977992 kernel: Run /init as init process
Feb 12 20:24:58.978012 kernel: with arguments:
Feb 12 20:24:58.978029 kernel: /init
Feb 12 20:24:58.978045 kernel: with environment:
Feb 12 20:24:58.978061 kernel: HOME=/
Feb 12 20:24:58.978077 kernel: TERM=linux
Feb 12 20:24:58.978093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:24:58.978115 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:24:58.978135 systemd[1]: Detected virtualization amazon.
Feb 12 20:24:58.978158 systemd[1]: Detected architecture arm64.
Feb 12 20:24:58.978175 systemd[1]: Running in initrd.
Feb 12 20:24:58.978192 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:24:58.978209 systemd[1]: Hostname set to .
Feb 12 20:24:58.978227 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:24:58.978245 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:24:58.978262 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:24:58.978279 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:24:58.978318 systemd[1]: Reached target paths.target.
Feb 12 20:24:58.978338 systemd[1]: Reached target slices.target.
Feb 12 20:24:58.978355 systemd[1]: Reached target swap.target.
Feb 12 20:24:58.978373 systemd[1]: Reached target timers.target.
Feb 12 20:24:58.978391 systemd[1]: Listening on iscsid.socket.
Feb 12 20:24:58.978408 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:24:58.978426 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:24:58.978444 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:24:58.978467 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:24:58.978484 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:24:58.978502 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:24:58.978520 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:24:58.978537 systemd[1]: Reached target sockets.target.
Feb 12 20:24:58.978554 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:24:58.978572 systemd[1]: Finished network-cleanup.service.
Feb 12 20:24:58.978589 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:24:58.978607 systemd[1]: Starting systemd-journald.service...
Feb 12 20:24:58.978629 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:24:58.978646 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:24:58.978664 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:24:58.978681 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:24:58.978702 systemd-journald[308]: Journal started
Feb 12 20:24:58.978797 systemd-journald[308]: Runtime Journal (/run/log/journal/ec22fd32eb0d445ea42f64496bd2b690) is 8.0M, max 75.4M, 67.4M free.
Feb 12 20:24:58.955427 systemd-modules-load[309]: Inserted module 'overlay'
Feb 12 20:24:58.990433 kernel: audit: type=1130 audit(1707769498.982:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:58.990468 systemd[1]: Started systemd-journald.service.
Feb 12 20:24:58.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.002434 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:24:59.007760 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 12 20:24:59.011734 kernel: Bridge firewalling registered
Feb 12 20:24:59.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.012521 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:24:59.026208 kernel: audit: type=1130 audit(1707769499.010:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.026587 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:24:59.041340 kernel: audit: type=1130 audit(1707769499.025:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.056331 kernel: audit: type=1130 audit(1707769499.040:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.056398 kernel: SCSI subsystem initialized
Feb 12 20:24:59.059642 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:24:59.068090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:24:59.087324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:24:59.087400 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:24:59.097608 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:24:59.102712 systemd-resolved[310]: Positive Trust Anchors:
Feb 12 20:24:59.102738 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:24:59.102794 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:24:59.109996 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 12 20:24:59.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.110310 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:24:59.149652 kernel: audit: type=1130 audit(1707769499.115:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.149689 kernel: audit: type=1130 audit(1707769499.124:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.149715 kernel: audit: type=1130 audit(1707769499.133:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.116632 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:24:59.125664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:24:59.136275 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:24:59.146272 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:24:59.179671 dracut-cmdline[330]: dracut-dracut-053
Feb 12 20:24:59.182447 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:24:59.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.192994 dracut-cmdline[330]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:24:59.204360 kernel: audit: type=1130 audit(1707769499.183:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.307327 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:24:59.321329 kernel: iscsi: registered transport (tcp)
Feb 12 20:24:59.345701 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:24:59.345781 kernel: QLogic iSCSI HBA Driver
Feb 12 20:24:59.585230 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 12 20:24:59.587126 kernel: random: crng init done
Feb 12 20:24:59.588591 systemd[1]: Started systemd-resolved.service.
Feb 12 20:24:59.600849 kernel: audit: type=1130 audit(1707769499.589:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.590486 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:24:59.617046 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:24:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:59.621716 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:24:59.687340 kernel: raid6: neonx8 gen() 6445 MB/s
Feb 12 20:24:59.705326 kernel: raid6: neonx8 xor() 4677 MB/s
Feb 12 20:24:59.723327 kernel: raid6: neonx4 gen() 6632 MB/s
Feb 12 20:24:59.741325 kernel: raid6: neonx4 xor() 4850 MB/s
Feb 12 20:24:59.759324 kernel: raid6: neonx2 gen() 5843 MB/s
Feb 12 20:24:59.777326 kernel: raid6: neonx2 xor() 4443 MB/s
Feb 12 20:24:59.795325 kernel: raid6: neonx1 gen() 4529 MB/s
Feb 12 20:24:59.813325 kernel: raid6: neonx1 xor() 3659 MB/s
Feb 12 20:24:59.831325 kernel: raid6: int64x8 gen() 3442 MB/s
Feb 12 20:24:59.849325 kernel: raid6: int64x8 xor() 2091 MB/s
Feb 12 20:24:59.867326 kernel: raid6: int64x4 gen() 3856 MB/s
Feb 12 20:24:59.885327 kernel: raid6: int64x4 xor() 2200 MB/s
Feb 12 20:24:59.903326 kernel: raid6: int64x2 gen() 3625 MB/s
Feb 12 20:24:59.921326 kernel: raid6: int64x2 xor() 1954 MB/s
Feb 12 20:24:59.939325 kernel: raid6: int64x1 gen() 2777 MB/s
Feb 12 20:24:59.958816 kernel: raid6: int64x1 xor() 1454 MB/s
Feb 12 20:24:59.958847 kernel: raid6: using algorithm neonx4 gen() 6632 MB/s
Feb 12 20:24:59.958871 kernel: raid6: .... xor() 4850 MB/s, rmw enabled
Feb 12 20:24:59.960611 kernel: raid6: using neon recovery algorithm
Feb 12 20:24:59.979333 kernel: xor: measuring software checksum speed
Feb 12 20:24:59.981322 kernel: 8regs : 9343 MB/sec
Feb 12 20:24:59.984325 kernel: 32regs : 11107 MB/sec
Feb 12 20:24:59.988244 kernel: arm64_neon : 9615 MB/sec
Feb 12 20:24:59.988276 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 12 20:25:00.078347 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 20:25:00.095369 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:25:00.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:00.097000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:25:00.100055 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:25:00.098000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:25:00.125359 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Feb 12 20:25:00.136106 systemd[1]: Started systemd-udevd.service.
Feb 12 20:25:00.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:00.144280 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:25:00.172567 dracut-pre-trigger[522]: rd.md=0: removing MD RAID activation
Feb 12 20:25:00.233140 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:25:00.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:00.237862 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:25:00.342326 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:25:00.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:00.457752 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 20:25:00.457816 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 12 20:25:00.465560 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 20:25:00.465876 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 20:25:00.480327 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:44:bf:82:f0:43
Feb 12 20:25:00.480595 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 12 20:25:00.483061 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:25:00.486128 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 20:25:00.495337 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 20:25:00.501422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:25:00.501476 kernel: GPT:9289727 != 16777215
Feb 12 20:25:00.501500 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:25:00.503618 kernel: GPT:9289727 != 16777215
Feb 12 20:25:00.504898 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:25:00.508311 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:25:00.588341 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (572)
Feb 12 20:25:00.605742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:25:00.667453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:25:00.697193 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:25:00.702951 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:25:00.725534 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:25:00.739172 systemd[1]: Starting disk-uuid.service...
Feb 12 20:25:00.760761 disk-uuid[670]: Primary Header is updated.
Feb 12 20:25:00.760761 disk-uuid[670]: Secondary Entries is updated.
Feb 12 20:25:00.760761 disk-uuid[670]: Secondary Header is updated.
Feb 12 20:25:00.771342 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:25:00.782325 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:25:00.791326 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:25:01.787337 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:25:01.788046 disk-uuid[671]: The operation has completed successfully.
Feb 12 20:25:01.953873 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:25:01.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:01.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:01.954072 systemd[1]: Finished disk-uuid.service.
Feb 12 20:25:01.970227 systemd[1]: Starting verity-setup.service...
Feb 12 20:25:02.007326 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 20:25:02.090752 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:25:02.096466 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:25:02.102476 systemd[1]: Finished verity-setup.service.
Feb 12 20:25:02.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.185344 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:25:02.186527 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:25:02.187381 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:25:02.188766 systemd[1]: Starting ignition-setup.service...
Feb 12 20:25:02.200917 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:25:02.223179 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:25:02.223247 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:25:02.223272 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:25:02.233338 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:25:02.250487 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:25:02.293062 systemd[1]: Finished ignition-setup.service.
Feb 12 20:25:02.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.297620 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:25:02.341846 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:25:02.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.345000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:25:02.347444 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:25:02.393745 systemd-networkd[1183]: lo: Link UP
Feb 12 20:25:02.393768 systemd-networkd[1183]: lo: Gained carrier
Feb 12 20:25:02.395517 systemd-networkd[1183]: Enumeration completed
Feb 12 20:25:02.397025 systemd[1]: Started systemd-networkd.service.
Feb 12 20:25:02.398967 systemd-networkd[1183]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:25:02.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.404092 systemd[1]: Reached target network.target.
Feb 12 20:25:02.410369 systemd-networkd[1183]: eth0: Link UP
Feb 12 20:25:02.410385 systemd-networkd[1183]: eth0: Gained carrier
Feb 12 20:25:02.411444 systemd[1]: Starting iscsiuio.service...
Feb 12 20:25:02.426460 systemd[1]: Started iscsiuio.service.
Feb 12 20:25:02.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.442155 systemd[1]: Starting iscsid.service...
Feb 12 20:25:02.449500 systemd-networkd[1183]: eth0: DHCPv4 address 172.31.25.148/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 20:25:02.454724 iscsid[1188]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:25:02.454724 iscsid[1188]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:25:02.454724 iscsid[1188]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:25:02.454724 iscsid[1188]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:25:02.454724 iscsid[1188]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:25:02.454724 iscsid[1188]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:25:02.480689 systemd[1]: Started iscsid.service.
Feb 12 20:25:02.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.486064 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:25:02.509016 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:25:02.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.512657 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:25:02.515828 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:25:02.517639 systemd[1]: Reached target remote-fs.target.
Feb 12 20:25:02.523714 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:25:02.539391 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:25:02.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.967352 ignition[1149]: Ignition 2.14.0
Feb 12 20:25:02.967429 ignition[1149]: Stage: fetch-offline
Feb 12 20:25:02.967736 ignition[1149]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:02.967797 ignition[1149]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:02.988506 ignition[1149]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:02.990986 ignition[1149]: Ignition finished successfully
Feb 12 20:25:02.994197 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:25:03.009609 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 12 20:25:03.009673 kernel: audit: type=1130 audit(1707769502.994:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:02.998526 systemd[1]: Starting ignition-fetch.service...
Feb 12 20:25:03.022619 ignition[1207]: Ignition 2.14.0
Feb 12 20:25:03.022648 ignition[1207]: Stage: fetch
Feb 12 20:25:03.022939 ignition[1207]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:03.022998 ignition[1207]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:03.036096 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:03.038437 ignition[1207]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.045694 ignition[1207]: INFO : PUT result: OK
Feb 12 20:25:03.048729 ignition[1207]: DEBUG : parsed url from cmdline: ""
Feb 12 20:25:03.048729 ignition[1207]: INFO : no config URL provided
Feb 12 20:25:03.048729 ignition[1207]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:25:03.054793 ignition[1207]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 12 20:25:03.054793 ignition[1207]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.054793 ignition[1207]: INFO : PUT result: OK
Feb 12 20:25:03.054793 ignition[1207]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 20:25:03.063111 ignition[1207]: INFO : GET result: OK
Feb 12 20:25:03.064582 ignition[1207]: DEBUG : parsing config with SHA512: d08895d324c56dc103668dea4f228c3254184282c5feb61a1ff213d55ae56958dd3d50f5d84a3a412ab79bd435a49b65ee66fb11ecfd17cc1b5628face67ce30
Feb 12 20:25:03.105078 unknown[1207]: fetched base config from "system"
Feb 12 20:25:03.105344 unknown[1207]: fetched base config from "system"
Feb 12 20:25:03.106533 ignition[1207]: fetch: fetch complete
Feb 12 20:25:03.105360 unknown[1207]: fetched user config from "aws"
Feb 12 20:25:03.106547 ignition[1207]: fetch: fetch passed
Feb 12 20:25:03.106633 ignition[1207]: Ignition finished successfully
Feb 12 20:25:03.117263 systemd[1]: Finished ignition-fetch.service.
Feb 12 20:25:03.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.124437 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:25:03.133948 kernel: audit: type=1130 audit(1707769503.121:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.147209 ignition[1213]: Ignition 2.14.0
Feb 12 20:25:03.147238 ignition[1213]: Stage: kargs
Feb 12 20:25:03.147559 ignition[1213]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:03.147617 ignition[1213]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:03.162347 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:03.164682 ignition[1213]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.168065 ignition[1213]: INFO : PUT result: OK
Feb 12 20:25:03.172680 ignition[1213]: kargs: kargs passed
Feb 12 20:25:03.172773 ignition[1213]: Ignition finished successfully
Feb 12 20:25:03.176345 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:25:03.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.189077 kernel: audit: type=1130 audit(1707769503.176:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.187314 systemd[1]: Starting ignition-disks.service...
Feb 12 20:25:03.202041 ignition[1219]: Ignition 2.14.0
Feb 12 20:25:03.202070 ignition[1219]: Stage: disks
Feb 12 20:25:03.202378 ignition[1219]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:03.202432 ignition[1219]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:03.215566 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:03.218156 ignition[1219]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.221353 ignition[1219]: INFO : PUT result: OK
Feb 12 20:25:03.226215 ignition[1219]: disks: disks passed
Feb 12 20:25:03.226356 ignition[1219]: Ignition finished successfully
Feb 12 20:25:03.230759 systemd[1]: Finished ignition-disks.service.
Feb 12 20:25:03.247585 kernel: audit: type=1130 audit(1707769503.231:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.232694 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:25:03.234486 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:25:03.236187 systemd[1]: Reached target local-fs.target.
Feb 12 20:25:03.245434 systemd[1]: Reached target sysinit.target.
Feb 12 20:25:03.248419 systemd[1]: Reached target basic.target.
Feb 12 20:25:03.258727 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:25:03.303400 systemd-fsck[1227]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 20:25:03.316076 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:25:03.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.320659 systemd[1]: Mounting sysroot.mount...
Feb 12 20:25:03.330437 kernel: audit: type=1130 audit(1707769503.316:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.339363 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:25:03.340019 systemd[1]: Mounted sysroot.mount.
Feb 12 20:25:03.340734 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:25:03.351546 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:25:03.361520 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:25:03.361599 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:25:03.361655 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:25:03.367615 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:25:03.374250 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:25:03.383759 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:25:03.402334 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1244)
Feb 12 20:25:03.408618 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:25:03.408676 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:25:03.410843 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:25:03.413749 initrd-setup-root[1249]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:25:03.421321 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:25:03.424773 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:25:03.433195 initrd-setup-root[1275]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:25:03.442611 initrd-setup-root[1283]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:25:03.450494 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:25:03.500575 systemd-networkd[1183]: eth0: Gained IPv6LL
Feb 12 20:25:03.642577 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:25:03.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.645840 systemd[1]: Starting ignition-mount.service...
Feb 12 20:25:03.658662 kernel: audit: type=1130 audit(1707769503.643:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.658104 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:25:03.670685 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:25:03.670864 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:25:03.708885 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:25:03.712226 ignition[1311]: INFO : Ignition 2.14.0
Feb 12 20:25:03.712226 ignition[1311]: INFO : Stage: mount
Feb 12 20:25:03.712226 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:03.712226 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:03.726171 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:03.727531 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.737177 ignition[1311]: INFO : PUT result: OK
Feb 12 20:25:03.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.759332 kernel: audit: type=1130 audit(1707769503.748:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.761381 ignition[1311]: INFO : mount: mount passed
Feb 12 20:25:03.763096 ignition[1311]: INFO : Ignition finished successfully
Feb 12 20:25:03.766479 systemd[1]: Finished ignition-mount.service.
Feb 12 20:25:03.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.777360 kernel: audit: type=1130 audit(1707769503.768:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:03.775694 systemd[1]: Starting ignition-files.service...
Feb 12 20:25:03.791018 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:25:03.808340 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1319)
Feb 12 20:25:03.814252 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:25:03.814313 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:25:03.814350 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:25:03.823344 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:25:03.827460 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:25:03.846598 ignition[1338]: INFO : Ignition 2.14.0
Feb 12 20:25:03.846598 ignition[1338]: INFO : Stage: files
Feb 12 20:25:03.849946 ignition[1338]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:25:03.849946 ignition[1338]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:25:03.867012 ignition[1338]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:25:03.869578 ignition[1338]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:25:03.872924 ignition[1338]: INFO : PUT result: OK
Feb 12 20:25:03.877764 ignition[1338]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:25:03.881982 ignition[1338]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:25:03.881982 ignition[1338]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:25:03.930521 ignition[1338]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:25:03.933629 ignition[1338]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:25:03.938040 ignition[1338]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:25:03.938040 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:25:03.938040 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:25:03.938040 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 20:25:03.938040 ignition[1338]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 20:25:03.933743 unknown[1338]: wrote ssh authorized keys file for user: core
Feb 12 20:25:04.423655 ignition[1338]: INFO : GET result: OK
Feb
12 20:25:04.878000 ignition[1338]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 20:25:04.883025 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 20:25:04.883025 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 20:25:04.883025 ignition[1338]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 20:25:05.262110 ignition[1338]: INFO : GET result: OK Feb 12 20:25:05.504855 ignition[1338]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 20:25:05.511705 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 20:25:05.511705 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 20:25:05.511705 ignition[1338]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:25:05.533388 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1338) Feb 12 20:25:05.533439 ignition[1338]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189832701" Feb 12 20:25:05.533439 ignition[1338]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189832701": device or resource busy Feb 12 20:25:05.533439 ignition[1338]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4189832701", trying btrfs: device or resource busy Feb 12 20:25:05.533439 ignition[1338]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189832701" Feb 12 20:25:05.555371 ignition[1338]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4189832701" Feb 12 20:25:05.563429 ignition[1338]: INFO : op(3): [started] unmounting "/mnt/oem4189832701" Feb 12 20:25:05.563429 ignition[1338]: INFO : op(3): [finished] unmounting "/mnt/oem4189832701" Feb 12 20:25:05.563429 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 20:25:05.563429 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:25:05.563429 ignition[1338]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 20:25:05.571458 systemd[1]: mnt-oem4189832701.mount: Deactivated successfully. 
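Each remote artifact the files stage fetches (the CNI plugins and crictl tarballs above, then kubeadm and kubelet below) is logged with `file matches expected sum of: <sha512>` before it is written into /sysroot. Below is a minimal sketch of that kind of verification, assuming the file has already been saved locally and the expected digest is known; it is an illustration, not Ignition's code.

```python
# Verify a downloaded artifact against an expected SHA-512 digest, streaming
# the file so large binaries such as kubelet are not read into memory at once.
import hashlib

def sha512_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Example with the crictl tarball and the (truncated) digest from the log:
# sha512_matches("/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz", "4c7e4541123c...")
```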
Feb 12 20:25:05.692604 ignition[1338]: INFO : GET result: OK Feb 12 20:25:06.233767 ignition[1338]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 12 20:25:06.238449 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:25:06.238449 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:25:06.245120 ignition[1338]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 12 20:25:06.312582 ignition[1338]: INFO : GET result: OK Feb 12 20:25:07.744787 ignition[1338]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:25:07.750313 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 20:25:07.759913 ignition[1338]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:25:07.788486 ignition[1338]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3778485890" Feb 12 20:25:07.788486 ignition[1338]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3778485890": device or resource busy Feb 12 20:25:07.788486 ignition[1338]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3778485890", trying btrfs: device or resource busy Feb 12 20:25:07.788486 ignition[1338]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3778485890" Feb 12 20:25:07.803385 ignition[1338]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3778485890" Feb 12 20:25:07.803385 ignition[1338]: INFO : op(6): [started] unmounting "/mnt/oem3778485890" Feb 12 20:25:07.808410 ignition[1338]: INFO : op(6): [finished] unmounting "/mnt/oem3778485890" Feb 12 20:25:07.808410 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 20:25:07.808410 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:25:07.808410 ignition[1338]: INFO : oem config 
not found in "/usr/share/oem", looking on oem partition Feb 12 20:25:07.837275 ignition[1338]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2575382689" Feb 12 20:25:07.840264 ignition[1338]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2575382689": device or resource busy Feb 12 20:25:07.840264 ignition[1338]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2575382689", trying btrfs: device or resource busy Feb 12 20:25:07.840264 ignition[1338]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2575382689" Feb 12 20:25:07.849934 ignition[1338]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2575382689" Feb 12 20:25:07.849934 ignition[1338]: INFO : op(9): [started] unmounting "/mnt/oem2575382689" Feb 12 20:25:07.849934 ignition[1338]: INFO : op(9): [finished] unmounting "/mnt/oem2575382689" Feb 12 20:25:07.849934 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:25:07.849934 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:25:07.849934 ignition[1338]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:25:07.876428 ignition[1338]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531263351" Feb 12 20:25:07.876428 ignition[1338]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531263351": device or resource busy Feb 12 20:25:07.876428 ignition[1338]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3531263351", trying btrfs: device or resource busy Feb 12 20:25:07.876428 ignition[1338]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531263351" Feb 12 20:25:07.889538 ignition[1338]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531263351" Feb 12 20:25:07.889538 ignition[1338]: INFO : op(c): [started] unmounting "/mnt/oem3531263351" Feb 12 20:25:07.884009 systemd[1]: mnt-oem3531263351.mount: Deactivated successfully. 
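The repeated `failed ... trying btrfs: device or resource busy` sequences above show a filesystem-type fallback when the OEM partition is mounted into a temporary directory: an ext4 mount is attempted first and, when it fails, the same device is retried as btrfs. The sketch below reproduces that retry pattern with the mount(8) command line; it is an illustration of the pattern, not Ignition's internal mount code.

```python
# Try mounting a device with a list of candidate filesystem types, mirroring
# the ext4 -> btrfs fallback visible in the Ignition log above.
import subprocess
import tempfile

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    mountpoint = tempfile.mkdtemp(prefix="oem")
    last_err = ""
    for fstype in fstypes:
        result = subprocess.run(
            ["mount", "-t", fstype, device, mountpoint],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return mountpoint  # caller unmounts later, e.g. umount <mountpoint>
        last_err = result.stderr.strip()
    raise RuntimeError(f"could not mount {device}: {last_err}")

# mount_with_fallback("/dev/disk/by-label/OEM")  # requires root
```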
Feb 12 20:25:07.901855 ignition[1338]: INFO : op(c): [finished] unmounting "/mnt/oem3531263351" Feb 12 20:25:07.904178 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:25:07.907934 ignition[1338]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:25:07.907934 ignition[1338]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:25:07.907934 ignition[1338]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service" Feb 12 20:25:07.918021 ignition[1338]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:25:07.925933 ignition[1338]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:25:07.925933 ignition[1338]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service" Feb 12 20:25:07.932483 ignition[1338]: INFO : files: op(12): [started] processing unit "nvidia.service" Feb 12 20:25:07.932483 ignition[1338]: INFO : files: op(12): [finished] processing unit "nvidia.service" Feb 12 20:25:07.932483 ignition[1338]: INFO : files: op(13): [started] processing unit "containerd.service" Feb 12 20:25:07.939868 ignition[1338]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:25:07.944376 ignition[1338]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:25:07.944376 ignition[1338]: INFO : files: op(13): [finished] processing unit "containerd.service" Feb 12 20:25:07.944376 ignition[1338]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:25:07.954031 ignition[1338]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:25:07.958159 ignition[1338]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:25:07.958159 ignition[1338]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:25:07.958159 ignition[1338]: INFO : files: op(17): [started] processing unit "prepare-critools.service" Feb 12 20:25:07.967588 ignition[1338]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:25:07.967588 ignition[1338]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:25:07.967588 ignition[1338]: INFO : files: op(17): [finished] processing unit "prepare-critools.service" Feb 12 20:25:07.967588 ignition[1338]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:07.980920 ignition[1338]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:25:07.991205 ignition[1338]: INFO : files: 
op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:07.991205 ignition[1338]: INFO : files: files passed Feb 12 20:25:07.991205 ignition[1338]: INFO : Ignition finished successfully Feb 12 20:25:08.026201 systemd[1]: Finished ignition-files.service. Feb 12 20:25:08.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.037325 kernel: audit: type=1130 audit(1707769508.027:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.039051 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:25:08.041020 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:25:08.047821 systemd[1]: Starting ignition-quench.service... Feb 12 20:25:08.056247 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:25:08.075546 kernel: audit: type=1130 audit(1707769508.056:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.075586 kernel: audit: type=1131 audit(1707769508.056:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.056468 systemd[1]: Finished ignition-quench.service. Feb 12 20:25:08.081889 initrd-setup-root-after-ignition[1363]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:25:08.086420 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
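Beyond plain files, the files stage writes systemd units, drop-ins (the containerd `10-use-cgroupfs.conf` above), and enablement presets into the new root before reporting `files passed`. The sketch below shows placing one drop-in under /sysroot; the drop-in body is a placeholder, since the log records only the path, not the contents, which come from the Ignition config.

```python
# Write a systemd drop-in into the target root, the way the files stage places
# containerd.service.d/10-use-cgroupfs.conf under /sysroot.
from pathlib import Path

def write_dropin(sysroot: str, unit: str, name: str, body: str) -> Path:
    dropin_dir = Path(sysroot) / "etc/systemd/system" / f"{unit}.d"
    dropin_dir.mkdir(parents=True, exist_ok=True)
    path = dropin_dir / name
    path.write_text(body)
    return path

# Placeholder body -- the real contents are supplied by the Ignition config:
# write_dropin("/sysroot", "containerd.service", "10-use-cgroupfs.conf",
#              "[Service]\n# settings supplied by the Ignition config\n")
```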
Feb 12 20:25:08.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.090449 systemd[1]: Reached target ignition-complete.target. Feb 12 20:25:08.102081 kernel: audit: type=1130 audit(1707769508.088:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.102844 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:25:08.130765 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:25:08.131151 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:25:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.140567 systemd[1]: Reached target initrd-fs.target. Feb 12 20:25:08.165826 kernel: audit: type=1130 audit(1707769508.135:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.165870 kernel: audit: type=1131 audit(1707769508.139:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.153482 systemd[1]: Reached target initrd.target. Feb 12 20:25:08.155081 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:25:08.156557 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:25:08.183202 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:25:08.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.193820 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:25:08.203372 kernel: audit: type=1130 audit(1707769508.182:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.213392 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:25:08.216656 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:25:08.223264 systemd[1]: Stopped target timers.target. Feb 12 20:25:08.227013 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:25:08.229061 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:25:08.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.232477 systemd[1]: Stopped target initrd.target. Feb 12 20:25:08.242600 kernel: audit: type=1131 audit(1707769508.231:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:08.242638 systemd[1]: Stopped target basic.target. Feb 12 20:25:08.245742 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:25:08.253321 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:25:08.256970 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:25:08.260644 systemd[1]: Stopped target remote-fs.target. Feb 12 20:25:08.268104 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:25:08.271625 systemd[1]: Stopped target sysinit.target. Feb 12 20:25:08.274769 systemd[1]: Stopped target local-fs.target. Feb 12 20:25:08.277872 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:25:08.281247 systemd[1]: Stopped target swap.target. Feb 12 20:25:08.287937 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:25:08.290011 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:25:08.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.293417 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:25:08.315101 kernel: audit: type=1131 audit(1707769508.292:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.315151 kernel: audit: type=1131 audit(1707769508.300:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.303533 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:25:08.303742 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:25:08.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.305763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:25:08.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.306069 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:25:08.315381 systemd[1]: ignition-files.service: Deactivated successfully. 
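The `audit[1]: SERVICE_START` / `SERVICE_STOP` records interleaved with the systemd messages follow the kernel audit format: a record type, an `audit(<epoch>.<millis>:<serial>)` stamp, and space-separated `key=value` fields. Below is a small parsing sketch for the simple fields; quoted `msg='...'` payloads would need a real tokenizer, and `ausearch` is the proper tool for serious work.

```python
# Pull the timestamp, serial number and simple key=value fields out of an
# audit record such as the SERVICE_STOP entries emitted during initrd teardown.
import re

AUDIT_RE = re.compile(r"audit\((?P<ts>\d+\.\d+):(?P<serial>\d+)\):\s*(?P<rest>.*)")

def parse_audit(line: str) -> dict:
    m = AUDIT_RE.search(line)
    if m is None:
        raise ValueError("not an audit record")
    fields = dict(p.split("=", 1) for p in m.group("rest").split() if "=" in p)
    return {"timestamp": float(m.group("ts")), "serial": int(m.group("serial")), **fields}

sample = "audit(1707769508.231:44): pid=1 uid=0 auid=4294967295 ses=4294967295 res=success"
print(parse_audit(sample)["serial"])  # -> 44
```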
Feb 12 20:25:08.350832 ignition[1376]: INFO : Ignition 2.14.0 Feb 12 20:25:08.350832 ignition[1376]: INFO : Stage: umount Feb 12 20:25:08.350832 ignition[1376]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:25:08.350832 ignition[1376]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:25:08.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.317207 systemd[1]: Stopped ignition-files.service. Feb 12 20:25:08.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.386646 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:25:08.386646 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:25:08.386646 ignition[1376]: INFO : PUT result: OK Feb 12 20:25:08.322116 systemd[1]: Stopping ignition-mount.service... Feb 12 20:25:08.330816 systemd[1]: Stopping iscsiuio.service... Feb 12 20:25:08.333442 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:25:08.333871 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:25:08.342555 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:25:08.350922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:25:08.351211 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:25:08.356446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:25:08.356734 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:25:08.375659 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:25:08.376128 systemd[1]: Stopped iscsiuio.service. Feb 12 20:25:08.382350 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:25:08.382551 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:25:08.422055 ignition[1376]: INFO : umount: umount passed Feb 12 20:25:08.429962 ignition[1376]: INFO : Ignition finished successfully Feb 12 20:25:08.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.423943 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:25:08.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:08.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.424125 systemd[1]: Stopped ignition-mount.service. Feb 12 20:25:08.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.431912 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:25:08.432012 systemd[1]: Stopped ignition-disks.service. Feb 12 20:25:08.435501 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:25:08.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.435590 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:25:08.438502 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:25:08.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.438581 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:25:08.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.440243 systemd[1]: Stopped target network.target. Feb 12 20:25:08.441846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:25:08.441936 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:25:08.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.444168 systemd[1]: Stopped target paths.target. Feb 12 20:25:08.446698 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:25:08.449376 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:25:08.451921 systemd[1]: Stopped target slices.target. Feb 12 20:25:08.453451 systemd[1]: Stopped target sockets.target. Feb 12 20:25:08.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.455106 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:25:08.455163 systemd[1]: Closed iscsid.socket. Feb 12 20:25:08.456585 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Feb 12 20:25:08.456659 systemd[1]: Closed iscsiuio.socket. Feb 12 20:25:08.458077 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:25:08.458161 systemd[1]: Stopped ignition-setup.service. Feb 12 20:25:08.460825 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:25:08.463194 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:25:08.465672 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:25:08.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.465884 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:25:08.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.468683 systemd-networkd[1183]: eth0: DHCPv6 lease lost Feb 12 20:25:08.541000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:25:08.469856 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:25:08.469959 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:25:08.473931 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:25:08.474279 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:25:08.490106 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:25:08.490178 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:25:08.493263 systemd[1]: Stopping network-cleanup.service... Feb 12 20:25:08.494890 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:25:08.495005 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:25:08.497008 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:25:08.497130 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:25:08.501495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:25:08.501596 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:25:08.513649 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:25:08.568000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:25:08.526995 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:25:08.527622 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:25:08.538918 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:25:08.539781 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:25:08.569045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:25:08.569170 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:25:08.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.581058 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:25:08.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.581139 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 12 20:25:08.584134 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:25:08.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.584223 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:25:08.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:08.586660 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:25:08.586741 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:25:08.589132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:25:08.589211 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:25:08.592620 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:25:08.606781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:25:08.640000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:25:08.640000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:25:08.645000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:25:08.645000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:25:08.645000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:25:08.606899 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:25:08.610948 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:25:08.611161 systemd[1]: Stopped network-cleanup.service. Feb 12 20:25:08.614410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:25:08.614594 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:25:08.617228 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:25:08.621512 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:25:08.639326 systemd[1]: Switching root. Feb 12 20:25:08.683334 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 12 20:25:08.683397 iscsid[1188]: iscsid shutting down. Feb 12 20:25:08.685471 systemd-journald[308]: Journal stopped Feb 12 20:25:14.474099 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:25:14.474220 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 20:25:14.474261 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:25:14.474314 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:25:14.474348 kernel: SELinux: policy capability open_perms=1 Feb 12 20:25:14.474380 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:25:14.474411 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:25:14.474442 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:25:14.474480 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:25:14.474511 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:25:14.474541 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:25:14.474574 systemd[1]: Successfully loaded SELinux policy in 106.733ms. Feb 12 20:25:14.474633 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.351ms. Feb 12 20:25:14.474677 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:25:14.474710 systemd[1]: Detected virtualization amazon. Feb 12 20:25:14.474743 systemd[1]: Detected architecture arm64. Feb 12 20:25:14.474777 systemd[1]: Detected first boot. Feb 12 20:25:14.474809 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:25:14.474839 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:25:14.474870 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:25:14.474903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:14.474942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:14.474977 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:14.475012 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:25:14.475043 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:25:14.475075 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:25:14.475107 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:25:14.475139 systemd[1]: Created slice system-getty.slice. Feb 12 20:25:14.475177 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:25:14.475224 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:25:14.475258 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:25:14.475305 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:25:14.475343 systemd[1]: Created slice user.slice. Feb 12 20:25:14.475376 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:25:14.475406 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:25:14.475438 systemd[1]: Set up automount boot.automount. Feb 12 20:25:14.475470 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:25:14.475505 systemd[1]: Reached target integritysetup.target. 
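The initrd journal stops at 20:25:08.685471 and the first record written after switch-root is stamped 20:25:14.474099, so roughly 5.8 seconds of wall-clock time elapse across the root switch and SELinux policy load. A small sketch for computing such gaps from the timestamp format used throughout this dump; the year is assumed for parsing because the log omits it.

```python
# Compute the wall-clock gap between two journal timestamps of the form
# "Feb 12 20:25:08.685471"; the year is an assumption, the log does not record it.
from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"

def delta_seconds(start: str, end: str, year: int = 2024) -> float:
    t0 = datetime.strptime(f"{year} {start}", FMT)
    t1 = datetime.strptime(f"{year} {end}", FMT)
    return (t1 - t0).total_seconds()

# Gap between "Journal stopped" and the first message after switch-root:
print(delta_seconds("Feb 12 20:25:08.685471", "Feb 12 20:25:14.474099"))  # ~5.79
```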
Feb 12 20:25:14.475538 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:25:14.475569 systemd[1]: Reached target remote-fs.target. Feb 12 20:25:14.475599 systemd[1]: Reached target slices.target. Feb 12 20:25:14.475628 systemd[1]: Reached target swap.target. Feb 12 20:25:14.475659 systemd[1]: Reached target torcx.target. Feb 12 20:25:14.475689 systemd[1]: Reached target veritysetup.target. Feb 12 20:25:14.475718 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:25:14.475752 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:25:14.475783 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 12 20:25:14.475815 kernel: audit: type=1400 audit(1707769514.147:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:25:14.475846 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:25:14.475880 kernel: audit: type=1335 audit(1707769514.150:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:25:14.475911 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:25:14.475940 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:25:14.475971 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:25:14.476002 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:25:14.476036 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:25:14.476066 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:25:14.476096 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:25:14.476126 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:25:14.476155 systemd[1]: Mounting media.mount... Feb 12 20:25:14.476186 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:25:14.476215 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:25:14.476247 systemd[1]: Mounting tmp.mount... Feb 12 20:25:14.476276 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:25:14.485131 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:25:14.485177 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:25:14.485212 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:25:14.485245 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:25:14.485275 systemd[1]: Starting modprobe@drm.service... Feb 12 20:25:14.485339 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:25:14.485372 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:25:14.485402 systemd[1]: Starting modprobe@loop.service... Feb 12 20:25:14.485435 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:25:14.485470 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 20:25:14.485503 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 20:25:14.485536 systemd[1]: Starting systemd-journald.service... Feb 12 20:25:14.485568 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:25:14.485605 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:25:14.485635 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:25:14.485673 systemd[1]: Starting systemd-udev-trigger.service... 
Feb 12 20:25:14.485705 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:25:14.485739 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:25:14.485786 systemd[1]: Mounted media.mount. Feb 12 20:25:14.485822 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:25:14.485855 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:25:14.485887 systemd[1]: Mounted tmp.mount. Feb 12 20:25:14.485917 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:25:14.485949 kernel: audit: type=1130 audit(1707769514.448:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.485979 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:25:14.486010 kernel: audit: type=1305 audit(1707769514.460:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:25:14.486043 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:25:14.486075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:25:14.486106 kernel: audit: type=1300 audit(1707769514.460:90): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc9e5d970 a2=4000 a3=1 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:14.486140 systemd-journald[1543]: Journal started Feb 12 20:25:14.486240 systemd-journald[1543]: Runtime Journal (/run/log/journal/ec22fd32eb0d445ea42f64496bd2b690) is 8.0M, max 75.4M, 67.4M free. Feb 12 20:25:14.150000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:25:14.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.460000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:25:14.460000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc9e5d970 a2=4000 a3=1 items=0 ppid=1 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:14.494602 kernel: fuse: init (API version 7.34) Feb 12 20:25:14.494674 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:25:14.499936 systemd[1]: Started systemd-journald.service. Feb 12 20:25:14.460000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:25:14.508852 kernel: audit: type=1327 audit(1707769514.460:90): proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:25:14.510163 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:25:14.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:14.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.528471 kernel: audit: type=1130 audit(1707769514.471:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.528557 kernel: audit: type=1131 audit(1707769514.471:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.530831 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:25:14.531356 systemd[1]: Finished modprobe@drm.service. Feb 12 20:25:14.534602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:25:14.550159 kernel: audit: type=1130 audit(1707769514.497:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.535695 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:25:14.539213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:25:14.539768 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:25:14.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.559047 kernel: audit: type=1131 audit(1707769514.497:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.552030 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:25:14.562100 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:25:14.564500 kernel: loop: module loaded Feb 12 20:25:14.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:14.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.565109 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:25:14.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.567608 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:25:14.568006 systemd[1]: Finished modprobe@loop.service. Feb 12 20:25:14.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.570578 systemd[1]: Reached target network-pre.target. Feb 12 20:25:14.575865 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:25:14.584412 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:25:14.588561 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:25:14.596873 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:25:14.603558 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:25:14.605478 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:25:14.607970 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:25:14.611635 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:25:14.616427 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:25:14.621003 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:25:14.635264 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:25:14.637412 systemd[1]: Mounted sys-kernel-config.mount. 
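Several units in this stretch are skipped on declarative condition checks rather than failing: `ConditionPathIsReadWrite=!/` (remount-root), `ConditionDirectoryNotEmpty=/sys/fs/pstore` (systemd-pstore), and the earlier `ConditionPathExists=` checks. The sketch below approximates just those three condition types, with a leading `!` negating the test; real systemd evaluates a much larger condition set, and its read-write check inspects mount flags rather than file permissions, so this is a stand-in only.

```python
# Approximate evaluation of three systemd unit conditions seen in this log.
# Note: systemd's ConditionPathIsReadWrite checks the mount's rw flag; the
# os.access() test below is only an illustration.
import os

def check_condition(name: str, value: str) -> bool:
    negate = value.startswith("!")
    path = value[1:] if negate else value
    if name == "ConditionPathExists":
        result = os.path.exists(path)
    elif name == "ConditionPathIsReadWrite":
        result = os.path.exists(path) and os.access(path, os.W_OK)
    elif name == "ConditionDirectoryNotEmpty":
        result = os.path.isdir(path) and bool(os.listdir(path))
    else:
        raise NotImplementedError(name)
    return result != negate  # XOR with the negation prefix

# systemd-pstore.service is skipped when this evaluates to False:
print(check_condition("ConditionDirectoryNotEmpty", "/sys/fs/pstore"))
```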
Feb 12 20:25:14.659549 systemd-journald[1543]: Time spent on flushing to /var/log/journal/ec22fd32eb0d445ea42f64496bd2b690 is 80.599ms for 1086 entries. Feb 12 20:25:14.659549 systemd-journald[1543]: System Journal (/var/log/journal/ec22fd32eb0d445ea42f64496bd2b690) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:25:14.764892 systemd-journald[1543]: Received client request to flush runtime journal. Feb 12 20:25:14.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.688960 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:25:14.691169 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:25:14.707042 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:25:14.747958 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:25:14.752221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:25:14.766818 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:25:14.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.814121 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:25:14.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:14.818743 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:25:14.834975 udevadm[1584]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 20:25:14.863233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:25:14.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:15.594816 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:25:15.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:15.599097 systemd[1]: Starting systemd-udevd.service... Feb 12 20:25:15.641343 systemd-udevd[1587]: Using default interface naming scheme 'v252'. Feb 12 20:25:15.691678 systemd[1]: Started systemd-udevd.service. Feb 12 20:25:15.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:15.696584 systemd[1]: Starting systemd-networkd.service... 
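journald reports flushing 1086 runtime entries to /var/log/journal in about 81 ms above. For examining the same journal after boot, `journalctl`'s JSON output is convenient; the sketch below counts persisted entries per unit for the current boot using standard journalctl flags, nothing Flatcar-specific.

```python
# Count journal entries per systemd unit for the current boot using
# journalctl's JSON output (one JSON object per line).
import json
import subprocess
from collections import Counter

def entries_per_unit() -> Counter:
    out = subprocess.run(
        ["journalctl", "-b", "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in out.splitlines():
        if line.strip():
            entry = json.loads(line)
            counts[entry.get("_SYSTEMD_UNIT", "<kernel/other>")] += 1
    return counts

# print(entries_per_unit().most_common(5))
```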
Feb 12 20:25:15.705730 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:25:15.783018 systemd[1]: Found device dev-ttyS0.device. Feb 12 20:25:15.809955 systemd[1]: Started systemd-userdbd.service. Feb 12 20:25:15.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:15.845110 (udev-worker)[1596]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:25:16.016787 systemd-networkd[1592]: lo: Link UP Feb 12 20:25:16.016810 systemd-networkd[1592]: lo: Gained carrier Feb 12 20:25:16.017736 systemd-networkd[1592]: Enumeration completed Feb 12 20:25:16.017944 systemd[1]: Started systemd-networkd.service. Feb 12 20:25:16.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.023770 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:25:16.024157 systemd-networkd[1592]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:25:16.030375 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1620) Feb 12 20:25:16.030506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:16.031280 systemd-networkd[1592]: eth0: Link UP Feb 12 20:25:16.031605 systemd-networkd[1592]: eth0: Gained carrier Feb 12 20:25:16.061582 systemd-networkd[1592]: eth0: DHCPv4 address 172.31.25.148/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 20:25:16.200655 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 20:25:16.201463 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:25:16.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.205858 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:25:16.251084 lvm[1707]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:16.287988 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:25:16.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.290518 systemd[1]: Reached target cryptsetup.target. Feb 12 20:25:16.294894 systemd[1]: Starting lvm2-activation.service... Feb 12 20:25:16.304214 lvm[1709]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:16.345200 systemd[1]: Finished lvm2-activation.service. Feb 12 20:25:16.347638 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:25:16.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.349534 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
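The entries above show systemd-networkd matching eth0 against the stock /usr/lib/systemd/network/zz-default.network and acquiring 172.31.25.148/20 over DHCPv4. A minimal sketch of an equivalent explicit unit, assuming one wanted to pin this behavior rather than rely on the fallback (the file name 50-eth0.network is illustrative):

```sh
# Drop a .network unit that matches eth0 and enables DHCP, roughly what the
# zz-default.network fallback already does for this interface.
cat <<'EOF' | sudo tee /etc/systemd/network/50-eth0.network
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
sudo systemctl restart systemd-networkd
networkctl status eth0   # confirm the lease (address/gateway as logged above)
```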
Feb 12 20:25:16.349596 systemd[1]: Reached target local-fs.target. Feb 12 20:25:16.351632 systemd[1]: Reached target machines.target. Feb 12 20:25:16.357384 systemd[1]: Starting ldconfig.service... Feb 12 20:25:16.359827 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:25:16.359958 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:16.362618 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:25:16.366751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:25:16.372023 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:25:16.374150 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:16.374497 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:16.377215 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:25:16.391449 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1712 (bootctl) Feb 12 20:25:16.393855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:25:16.414501 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:25:16.417485 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:25:16.420784 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:25:16.447949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:25:16.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.547418 systemd-fsck[1721]: fsck.fat 4.2 (2021-01-31) Feb 12 20:25:16.547418 systemd-fsck[1721]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 12 20:25:16.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.550045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:25:16.555416 systemd[1]: Mounting boot.mount... Feb 12 20:25:16.587484 systemd[1]: Mounted boot.mount. Feb 12 20:25:16.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.614424 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:25:16.779449 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:25:16.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.784127 systemd[1]: Starting audit-rules.service... Feb 12 20:25:16.790541 systemd[1]: Starting clean-ca-certificates.service... 
Feb 12 20:25:16.795030 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:25:16.806992 systemd[1]: Starting systemd-resolved.service... Feb 12 20:25:16.814186 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:25:16.821779 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:25:16.825999 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:25:16.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.829201 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:25:16.868000 audit[1747]: SYSTEM_BOOT pid=1747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.873757 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:25:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.892650 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:25:16.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:16.949000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:25:16.949000 audit[1762]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec3f3270 a2=420 a3=0 items=0 ppid=1739 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:16.949000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:25:16.950791 augenrules[1762]: No rules Feb 12 20:25:16.952439 systemd[1]: Finished audit-rules.service. Feb 12 20:25:17.012431 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:25:17.014603 systemd[1]: Reached target time-set.target. Feb 12 20:25:17.025709 systemd-resolved[1743]: Positive Trust Anchors: Feb 12 20:25:17.026236 systemd-resolved[1743]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:25:17.026794 systemd-resolved[1743]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:25:17.045099 systemd-resolved[1743]: Defaulting to hostname 'linux'. Feb 12 20:25:17.050476 systemd[1]: Started systemd-resolved.service. Feb 12 20:25:17.052494 systemd[1]: Reached target network.target. 
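The audit-rules entries above show auditctl loading /etc/audit/audit.rules (the PROCTITLE record encodes "/sbin/auditctl -R /etc/audit/audit.rules"), with augenrules reporting "No rules" on this image. As a hedged sketch, adding a rule on a system that uses augenrules might look like the following; the watch on /etc/passwd is only an example, not something configured here:

```sh
# Append an example watch rule (log writes/attribute changes to /etc/passwd).
echo '-w /etc/passwd -p wa -k identity' | sudo tee /etc/audit/rules.d/identity.rules

# Merge rules.d/ into /etc/audit/audit.rules and load it into the kernel,
# the same load path the audit records above reflect.
sudo augenrules --load

# Verify the loaded rule set.
sudo auditctl -l
```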
Feb 12 20:25:17.054265 systemd[1]: Reached target nss-lookup.target. Feb 12 20:25:17.540894 systemd-resolved[1743]: Clock change detected. Flushing caches. Feb 12 20:25:17.540900 systemd-timesyncd[1744]: Contacted time server 131.153.171.22:123 (0.flatcar.pool.ntp.org). Feb 12 20:25:17.541032 systemd-timesyncd[1744]: Initial clock synchronization to Mon 2024-02-12 20:25:17.540667 UTC. Feb 12 20:25:17.631899 systemd-networkd[1592]: eth0: Gained IPv6LL Feb 12 20:25:17.635017 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:25:17.637245 systemd[1]: Reached target network-online.target. Feb 12 20:25:17.869718 ldconfig[1711]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:25:17.912844 systemd[1]: Finished ldconfig.service. Feb 12 20:25:17.917142 systemd[1]: Starting systemd-update-done.service... Feb 12 20:25:17.932389 systemd[1]: Finished systemd-update-done.service. Feb 12 20:25:17.934433 systemd[1]: Reached target sysinit.target. Feb 12 20:25:17.936255 systemd[1]: Started motdgen.path. Feb 12 20:25:17.937878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:25:17.940430 systemd[1]: Started logrotate.timer. Feb 12 20:25:17.942222 systemd[1]: Started mdadm.timer. Feb 12 20:25:17.943632 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:25:17.945401 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:25:17.945454 systemd[1]: Reached target paths.target. Feb 12 20:25:17.946973 systemd[1]: Reached target timers.target. Feb 12 20:25:17.949096 systemd[1]: Listening on dbus.socket. Feb 12 20:25:17.952967 systemd[1]: Starting docker.socket... Feb 12 20:25:17.956609 systemd[1]: Listening on sshd.socket. Feb 12 20:25:17.958390 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:17.959260 systemd[1]: Listening on docker.socket. Feb 12 20:25:17.960946 systemd[1]: Reached target sockets.target. Feb 12 20:25:17.962981 systemd[1]: Reached target basic.target. Feb 12 20:25:17.964999 systemd[1]: System is tainted: cgroupsv1 Feb 12 20:25:17.965252 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:25:17.965449 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:25:17.968234 systemd[1]: Started amazon-ssm-agent.service. Feb 12 20:25:17.972936 systemd[1]: Starting containerd.service... Feb 12 20:25:17.978998 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:25:17.983937 systemd[1]: Starting dbus.service... Feb 12 20:25:17.993827 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:25:18.001616 systemd[1]: Starting extend-filesystems.service... Feb 12 20:25:18.005497 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:25:18.013684 systemd[1]: Starting motdgen.service... Feb 12 20:25:18.032351 systemd[1]: Started nvidia.service. Feb 12 20:25:18.045901 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:25:18.159500 jq[1782]: false Feb 12 20:25:18.050311 systemd[1]: Starting prepare-critools.service... Feb 12 20:25:18.068290 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Feb 12 20:25:18.090895 systemd[1]: Starting sshd-keygen.service... Feb 12 20:25:18.107628 systemd[1]: Starting systemd-logind.service... Feb 12 20:25:18.116974 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:18.117130 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:25:18.132974 systemd[1]: Starting update-engine.service... Feb 12 20:25:18.137753 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:25:18.164551 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:25:18.165099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:25:18.202386 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:25:18.205707 jq[1794]: true Feb 12 20:25:18.202915 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:25:18.226589 tar[1804]: ./ Feb 12 20:25:18.226589 tar[1804]: ./macvlan Feb 12 20:25:18.240616 tar[1801]: crictl Feb 12 20:25:18.262761 jq[1816]: true Feb 12 20:25:18.293775 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:25:18.335808 systemd[1]: Finished motdgen.service. Feb 12 20:25:18.337983 dbus-daemon[1780]: [system] SELinux support is enabled Feb 12 20:25:18.338262 systemd[1]: Started dbus.service. Feb 12 20:25:18.343098 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:25:18.343154 systemd[1]: Reached target system-config.target. Feb 12 20:25:18.345054 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:25:18.345100 systemd[1]: Reached target user-config.target. Feb 12 20:25:18.348349 dbus-daemon[1780]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1592 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 20:25:18.351450 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 20:25:18.359847 systemd[1]: Starting systemd-hostnamed.service... Feb 12 20:25:18.386060 extend-filesystems[1783]: Found nvme0n1 Feb 12 20:25:18.399230 extend-filesystems[1783]: Found nvme0n1p1 Feb 12 20:25:18.401368 extend-filesystems[1783]: Found nvme0n1p2 Feb 12 20:25:18.403135 extend-filesystems[1783]: Found nvme0n1p3 Feb 12 20:25:18.408418 extend-filesystems[1783]: Found usr Feb 12 20:25:18.410196 extend-filesystems[1783]: Found nvme0n1p4 Feb 12 20:25:18.417927 extend-filesystems[1783]: Found nvme0n1p6 Feb 12 20:25:18.422905 extend-filesystems[1783]: Found nvme0n1p7 Feb 12 20:25:18.424881 extend-filesystems[1783]: Found nvme0n1p9 Feb 12 20:25:18.430411 extend-filesystems[1783]: Checking size of /dev/nvme0n1p9 Feb 12 20:25:18.473811 extend-filesystems[1783]: Resized partition /dev/nvme0n1p9 Feb 12 20:25:18.482382 extend-filesystems[1855]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:25:18.506300 amazon-ssm-agent[1776]: 2024/02/12 20:25:18 Failed to load instance info from vault. RegistrationKey does not exist. 
Feb 12 20:25:18.515504 update_engine[1793]: I0212 20:25:18.514168 1793 main.cc:92] Flatcar Update Engine starting Feb 12 20:25:18.523699 systemd[1]: Started update-engine.service. Feb 12 20:25:18.527135 update_engine[1793]: I0212 20:25:18.527091 1793 update_check_scheduler.cc:74] Next update check in 6m56s Feb 12 20:25:18.530292 systemd[1]: Started locksmithd.service. Feb 12 20:25:18.538876 amazon-ssm-agent[1776]: Initializing new seelog logger Feb 12 20:25:18.539142 amazon-ssm-agent[1776]: New Seelog Logger Creation Complete Feb 12 20:25:18.539235 amazon-ssm-agent[1776]: 2024/02/12 20:25:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:25:18.539235 amazon-ssm-agent[1776]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:25:18.547848 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 12 20:25:18.550327 amazon-ssm-agent[1776]: 2024/02/12 20:25:18 processing appconfig overrides Feb 12 20:25:18.633758 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 12 20:25:18.668675 extend-filesystems[1855]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 12 20:25:18.668675 extend-filesystems[1855]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:25:18.668675 extend-filesystems[1855]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 12 20:25:18.668144 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:25:18.697074 bash[1856]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:25:18.700787 extend-filesystems[1783]: Resized filesystem in /dev/nvme0n1p9 Feb 12 20:25:18.668670 systemd[1]: Finished extend-filesystems.service. Feb 12 20:25:18.689979 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:25:18.717539 env[1808]: time="2024-02-12T20:25:18.717446306Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:25:18.746518 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 20:25:18.746796 systemd[1]: Started systemd-hostnamed.service. Feb 12 20:25:18.751029 dbus-daemon[1780]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1833 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 20:25:18.756821 tar[1804]: ./static Feb 12 20:25:18.793481 systemd[1]: Starting polkit.service... Feb 12 20:25:18.797802 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:25:18.846307 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:25:18.849448 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 20:25:18.854952 systemd-logind[1792]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 20:25:18.868278 systemd-logind[1792]: New seat seat0. Feb 12 20:25:18.880499 systemd[1]: Started systemd-logind.service. Feb 12 20:25:18.902169 polkitd[1891]: Started polkitd version 121 Feb 12 20:25:18.947151 env[1808]: time="2024-02-12T20:25:18.947012115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:25:18.947314 env[1808]: time="2024-02-12T20:25:18.947270679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955008 env[1808]: time="2024-02-12T20:25:18.954930507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955008 env[1808]: time="2024-02-12T20:25:18.954998835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955518 env[1808]: time="2024-02-12T20:25:18.955458051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955627 env[1808]: time="2024-02-12T20:25:18.955513695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955627 env[1808]: time="2024-02-12T20:25:18.955548999Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:25:18.955627 env[1808]: time="2024-02-12T20:25:18.955573995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.955820 env[1808]: time="2024-02-12T20:25:18.955770735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.956309 env[1808]: time="2024-02-12T20:25:18.956244735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:18.962059 polkitd[1891]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 20:25:18.962764 env[1808]: time="2024-02-12T20:25:18.962679603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:18.962870 env[1808]: time="2024-02-12T20:25:18.962758647Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:25:18.962947 env[1808]: time="2024-02-12T20:25:18.962909175Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:25:18.963019 env[1808]: time="2024-02-12T20:25:18.962939847Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:25:18.963180 polkitd[1891]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 20:25:18.968986 polkitd[1891]: Finished loading, compiling and executing 2 rules Feb 12 20:25:18.970382 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 20:25:18.970623 systemd[1]: Started polkit.service. Feb 12 20:25:18.980985 polkitd[1891]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 20:25:18.982079 env[1808]: time="2024-02-12T20:25:18.981995415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:25:18.982197 env[1808]: time="2024-02-12T20:25:18.982094511Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:25:18.982197 env[1808]: time="2024-02-12T20:25:18.982178247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 12 20:25:18.982304 env[1808]: time="2024-02-12T20:25:18.982275639Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.982460 env[1808]: time="2024-02-12T20:25:18.982419435Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.982538 env[1808]: time="2024-02-12T20:25:18.982465791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.982538 env[1808]: time="2024-02-12T20:25:18.982522827Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.983409 env[1808]: time="2024-02-12T20:25:18.983325747Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.983550 env[1808]: time="2024-02-12T20:25:18.983415855Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.983550 env[1808]: time="2024-02-12T20:25:18.983473731Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.983550 env[1808]: time="2024-02-12T20:25:18.983505531Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.983777 env[1808]: time="2024-02-12T20:25:18.983565027Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:25:18.983955 env[1808]: time="2024-02-12T20:25:18.983888535Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:25:18.984255 env[1808]: time="2024-02-12T20:25:18.984163095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:25:18.985222 env[1808]: time="2024-02-12T20:25:18.985138467Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:25:18.985371 env[1808]: time="2024-02-12T20:25:18.985242075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985371 env[1808]: time="2024-02-12T20:25:18.985333371Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:25:18.985527 env[1808]: time="2024-02-12T20:25:18.985486539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985586 env[1808]: time="2024-02-12T20:25:18.985522815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985642 env[1808]: time="2024-02-12T20:25:18.985578423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985642 env[1808]: time="2024-02-12T20:25:18.985611207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985774 env[1808]: time="2024-02-12T20:25:18.985664523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985774 env[1808]: time="2024-02-12T20:25:18.985701663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 12 20:25:18.985894 env[1808]: time="2024-02-12T20:25:18.985769991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985894 env[1808]: time="2024-02-12T20:25:18.985801323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.985894 env[1808]: time="2024-02-12T20:25:18.985865211Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:25:18.986324 env[1808]: time="2024-02-12T20:25:18.986274795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.986423 env[1808]: time="2024-02-12T20:25:18.986325783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.986423 env[1808]: time="2024-02-12T20:25:18.986383707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:25:18.986532 env[1808]: time="2024-02-12T20:25:18.986414991Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:25:18.986532 env[1808]: time="2024-02-12T20:25:18.986474067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:25:18.986652 env[1808]: time="2024-02-12T20:25:18.986501907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:25:18.986652 env[1808]: time="2024-02-12T20:25:18.986561979Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:25:18.986800 env[1808]: time="2024-02-12T20:25:18.986648955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 20:25:18.987411 env[1808]: time="2024-02-12T20:25:18.987231279Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:25:18.988498 env[1808]: time="2024-02-12T20:25:18.987412407Z" level=info msg="Connect containerd service" Feb 12 20:25:18.988498 env[1808]: time="2024-02-12T20:25:18.987498507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:25:18.995524 env[1808]: time="2024-02-12T20:25:18.995433123Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:25:18.995793 env[1808]: time="2024-02-12T20:25:18.995698803Z" level=info msg="Start subscribing containerd event" Feb 12 20:25:18.995869 env[1808]: time="2024-02-12T20:25:18.995836875Z" level=info msg="Start recovering state" Feb 12 20:25:18.996040 env[1808]: time="2024-02-12T20:25:18.995978679Z" level=info msg="Start event monitor" Feb 12 20:25:18.996112 env[1808]: time="2024-02-12T20:25:18.996052395Z" level=info msg="Start snapshots syncer" Feb 12 20:25:18.996112 env[1808]: time="2024-02-12T20:25:18.996098991Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:25:18.996229 env[1808]: time="2024-02-12T20:25:18.996124407Z" level=info msg="Start streaming server" Feb 12 20:25:19.002403 env[1808]: time="2024-02-12T20:25:18.997978059Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 20:25:19.002403 env[1808]: time="2024-02-12T20:25:18.998149551Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:25:19.002403 env[1808]: time="2024-02-12T20:25:19.000555263Z" level=info msg="containerd successfully booted in 0.351075s" Feb 12 20:25:18.998449 systemd[1]: Started containerd.service. Feb 12 20:25:19.063275 systemd-resolved[1743]: System hostname changed to 'ip-172-31-25-148'. Feb 12 20:25:19.063277 systemd-hostnamed[1833]: Hostname set to (transient) Feb 12 20:25:19.071236 tar[1804]: ./vlan Feb 12 20:25:19.224313 coreos-metadata[1779]: Feb 12 20:25:19.224 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:25:19.227940 coreos-metadata[1779]: Feb 12 20:25:19.227 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 12 20:25:19.229100 coreos-metadata[1779]: Feb 12 20:25:19.228 INFO Fetch successful Feb 12 20:25:19.229100 coreos-metadata[1779]: Feb 12 20:25:19.229 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:25:19.230188 coreos-metadata[1779]: Feb 12 20:25:19.230 INFO Fetch successful Feb 12 20:25:19.233506 unknown[1779]: wrote ssh authorized keys file for user: core Feb 12 20:25:19.273116 update-ssh-keys[1947]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:25:19.274002 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 20:25:19.313262 tar[1804]: ./portmap Feb 12 20:25:19.491673 tar[1804]: ./host-local Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Create new startup processor Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [LongRunningPluginsManager] registered plugins: {} Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing bookkeeping folders Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO removing the completed state files Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing bookkeeping folders for long running plugins Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing healthcheck folders for long running plugins Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing locations for inventory plugin Feb 12 20:25:19.499756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing default location for custom inventory Feb 12 20:25:19.500340 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing default location for file inventory Feb 12 20:25:19.500456 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Initializing default location for role inventory Feb 12 20:25:19.500575 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Init the cloudwatchlogs publisher Feb 12 20:25:19.500692 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:softwareInventory Feb 12 20:25:19.500967 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 12 20:25:19.501094 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:refreshAssociation Feb 12 20:25:19.501238 
amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:configurePackage Feb 12 20:25:19.501362 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 12 20:25:19.501476 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:configureDocker Feb 12 20:25:19.501600 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:runDockerAction Feb 12 20:25:19.501716 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:downloadContent Feb 12 20:25:19.501872 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform independent plugin aws:runDocument Feb 12 20:25:19.501988 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Successfully loaded platform dependent plugin aws:runShellScript Feb 12 20:25:19.502114 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 12 20:25:19.502235 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO OS: linux, Arch: arm64 Feb 12 20:25:19.503040 amazon-ssm-agent[1776]: datastore file /var/lib/amazon/ssm/i-08585822e30ae45f2/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 12 20:25:19.610851 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] Starting session document processing engine... Feb 12 20:25:19.664780 tar[1804]: ./vrf Feb 12 20:25:19.706688 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 12 20:25:19.776429 tar[1804]: ./bridge Feb 12 20:25:19.801060 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 12 20:25:19.852002 tar[1804]: ./tuning Feb 12 20:25:19.895569 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [OfflineService] Starting document processing engine... Feb 12 20:25:19.906073 tar[1804]: ./firewall Feb 12 20:25:19.990272 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [OfflineService] [EngineProcessor] Starting Feb 12 20:25:20.013467 tar[1804]: ./host-device Feb 12 20:25:20.085247 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [OfflineService] [EngineProcessor] Initial processing Feb 12 20:25:20.091099 tar[1804]: ./sbr Feb 12 20:25:20.152924 tar[1804]: ./loopback Feb 12 20:25:20.180281 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-08585822e30ae45f2, requestId: 5995a5ac-ae61-433c-92c9-6410e610dcaf Feb 12 20:25:20.252010 tar[1804]: ./dhcp Feb 12 20:25:20.275568 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [OfflineService] Starting message polling Feb 12 20:25:20.371119 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [OfflineService] Starting send replies to MDS Feb 12 20:25:20.396671 systemd[1]: Finished prepare-critools.service. 
Feb 12 20:25:20.448089 tar[1804]: ./ptp Feb 12 20:25:20.466769 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 12 20:25:20.510805 tar[1804]: ./ipvlan Feb 12 20:25:20.562658 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] Starting document processing engine... Feb 12 20:25:20.572039 tar[1804]: ./bandwidth Feb 12 20:25:20.658951 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:25:20.663026 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 12 20:25:20.701872 locksmithd[1858]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:25:20.758918 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 12 20:25:20.855353 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] Starting message polling Feb 12 20:25:20.952097 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 12 20:25:21.048847 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [instanceID=i-08585822e30ae45f2] Starting association polling Feb 12 20:25:21.145895 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 12 20:25:21.243202 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 12 20:25:21.340580 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 12 20:25:21.389530 sshd_keygen[1825]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:25:21.425602 systemd[1]: Finished sshd-keygen.service. Feb 12 20:25:21.430485 systemd[1]: Starting issuegen.service... Feb 12 20:25:21.438247 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 12 20:25:21.443444 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:25:21.444031 systemd[1]: Finished issuegen.service. Feb 12 20:25:21.449210 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:25:21.465436 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:25:21.470365 systemd[1]: Started getty@tty1.service. Feb 12 20:25:21.475391 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:25:21.477606 systemd[1]: Reached target getty.target. Feb 12 20:25:21.480365 systemd[1]: Reached target multi-user.target. Feb 12 20:25:21.486270 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:25:21.500829 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:25:21.501587 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:25:21.504695 systemd[1]: Startup finished in 12.095s (kernel) + 11.520s (userspace) = 23.615s. Feb 12 20:25:21.536136 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 12 20:25:21.634193 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] listening reply. Feb 12 20:25:21.732455 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [HealthCheck] HealthCheck reporting agent health. 
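Earlier in the boot, containerd's CRI plugin warned that no network config was found in /etc/cni/net.d, and prepare-cni-plugins.service has just finished unpacking the standard plugins (bridge, ptp, host-local, and so on) into the plugin directory the CRI config above points at (/opt/cni/bin). Purely as a hedged illustration, a minimal bridge conflist that would satisfy that check could look like this; the network name, subnet, and file name are made up for the example, and on a real cluster the network provider (Flannel, Calico, etc.) normally installs its own config instead:

```sh
# Minimal CNI config using the bridge plugin with host-local IPAM.
# Values below (mynet, 10.22.0.0/16) are illustrative.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }
  ]
}
EOF
# containerd watches this directory ("Start cni network conf syncer" above),
# so the config is normally picked up without a restart.
```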
Feb 12 20:25:21.831011 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 12 20:25:21.929520 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 12 20:25:22.028342 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [StartupProcessor] Executing startup processor tasks Feb 12 20:25:22.127442 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 12 20:25:22.226562 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 12 20:25:22.325974 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 12 20:25:22.425648 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08585822e30ae45f2?role=subscribe&stream=input Feb 12 20:25:22.525416 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08585822e30ae45f2?role=subscribe&stream=input Feb 12 20:25:22.625403 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] Starting receiving message from control channel Feb 12 20:25:22.725756 amazon-ssm-agent[1776]: 2024-02-12 20:25:19 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 12 20:25:27.511594 systemd[1]: Created slice system-sshd.slice. Feb 12 20:25:27.514251 systemd[1]: Started sshd@0-172.31.25.148:22-147.75.109.163:39396.service. Feb 12 20:25:27.693291 sshd[2025]: Accepted publickey for core from 147.75.109.163 port 39396 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:27.697343 sshd[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:27.714232 systemd[1]: Created slice user-500.slice. Feb 12 20:25:27.716354 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:25:27.721662 systemd-logind[1792]: New session 1 of user core. Feb 12 20:25:27.737651 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:25:27.741062 systemd[1]: Starting user@500.service... Feb 12 20:25:27.749437 (systemd)[2030]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:27.923470 systemd[2030]: Queued start job for default target default.target. Feb 12 20:25:27.923906 systemd[2030]: Reached target paths.target. Feb 12 20:25:27.923945 systemd[2030]: Reached target sockets.target. Feb 12 20:25:27.923977 systemd[2030]: Reached target timers.target. Feb 12 20:25:27.924006 systemd[2030]: Reached target basic.target. Feb 12 20:25:27.924101 systemd[2030]: Reached target default.target. Feb 12 20:25:27.924162 systemd[2030]: Startup finished in 163ms. Feb 12 20:25:27.925677 systemd[1]: Started user@500.service. Feb 12 20:25:27.927597 systemd[1]: Started session-1.scope. Feb 12 20:25:28.076354 systemd[1]: Started sshd@1-172.31.25.148:22-147.75.109.163:39406.service. 
Feb 12 20:25:28.245611 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 39406 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:28.248106 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.257017 systemd[1]: Started session-2.scope. Feb 12 20:25:28.257937 systemd-logind[1792]: New session 2 of user core. Feb 12 20:25:28.391335 sshd[2039]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:28.396391 systemd[1]: sshd@1-172.31.25.148:22-147.75.109.163:39406.service: Deactivated successfully. Feb 12 20:25:28.397833 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:25:28.400208 systemd-logind[1792]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:25:28.402737 systemd-logind[1792]: Removed session 2. Feb 12 20:25:28.417476 systemd[1]: Started sshd@2-172.31.25.148:22-147.75.109.163:39410.service. Feb 12 20:25:28.590346 sshd[2046]: Accepted publickey for core from 147.75.109.163 port 39410 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:28.593500 sshd[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.601828 systemd-logind[1792]: New session 3 of user core. Feb 12 20:25:28.603306 systemd[1]: Started session-3.scope. Feb 12 20:25:28.732027 sshd[2046]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:28.737760 systemd[1]: sshd@2-172.31.25.148:22-147.75.109.163:39410.service: Deactivated successfully. Feb 12 20:25:28.739219 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:25:28.741718 systemd-logind[1792]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:25:28.744075 systemd-logind[1792]: Removed session 3. Feb 12 20:25:28.756230 systemd[1]: Started sshd@3-172.31.25.148:22-147.75.109.163:39420.service. Feb 12 20:25:28.924300 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 39420 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:28.927470 sshd[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.935521 systemd-logind[1792]: New session 4 of user core. Feb 12 20:25:28.936819 systemd[1]: Started session-4.scope. Feb 12 20:25:29.074260 sshd[2053]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:29.079507 systemd-logind[1792]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:25:29.079956 systemd[1]: sshd@3-172.31.25.148:22-147.75.109.163:39420.service: Deactivated successfully. Feb 12 20:25:29.081574 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:25:29.082621 systemd-logind[1792]: Removed session 4. Feb 12 20:25:29.100026 systemd[1]: Started sshd@4-172.31.25.148:22-147.75.109.163:39424.service. Feb 12 20:25:29.268887 sshd[2060]: Accepted publickey for core from 147.75.109.163 port 39424 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:29.272158 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:29.280851 systemd-logind[1792]: New session 5 of user core. Feb 12 20:25:29.281850 systemd[1]: Started session-5.scope. Feb 12 20:25:29.400855 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:25:29.402018 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:25:30.062018 systemd[1]: Reloading. 
Feb 12 20:25:30.197558 /usr/lib/systemd/system-generators/torcx-generator[2094]: time="2024-02-12T20:25:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:25:30.198205 /usr/lib/systemd/system-generators/torcx-generator[2094]: time="2024-02-12T20:25:30Z" level=info msg="torcx already run" Feb 12 20:25:30.371293 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:30.371334 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:30.410220 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:30.596406 systemd[1]: Started kubelet.service. Feb 12 20:25:30.622821 systemd[1]: Starting coreos-metadata.service... Feb 12 20:25:30.734938 kubelet[2154]: E0212 20:25:30.734852 2154 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:25:30.739440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:25:30.739869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:25:30.805642 coreos-metadata[2162]: Feb 12 20:25:30.805 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:25:30.806816 coreos-metadata[2162]: Feb 12 20:25:30.806 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 12 20:25:30.807672 coreos-metadata[2162]: Feb 12 20:25:30.807 INFO Fetch successful Feb 12 20:25:30.807672 coreos-metadata[2162]: Feb 12 20:25:30.807 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 12 20:25:30.808468 coreos-metadata[2162]: Feb 12 20:25:30.808 INFO Fetch successful Feb 12 20:25:30.808554 coreos-metadata[2162]: Feb 12 20:25:30.808 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 12 20:25:30.809172 coreos-metadata[2162]: Feb 12 20:25:30.809 INFO Fetch successful Feb 12 20:25:30.809172 coreos-metadata[2162]: Feb 12 20:25:30.809 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 12 20:25:30.809863 coreos-metadata[2162]: Feb 12 20:25:30.809 INFO Fetch successful Feb 12 20:25:30.809994 coreos-metadata[2162]: Feb 12 20:25:30.809 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 12 20:25:30.810566 coreos-metadata[2162]: Feb 12 20:25:30.810 INFO Fetch successful Feb 12 20:25:30.810682 coreos-metadata[2162]: Feb 12 20:25:30.810 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 12 20:25:30.811238 coreos-metadata[2162]: Feb 12 20:25:30.811 INFO Fetch successful Feb 12 20:25:30.811321 coreos-metadata[2162]: Feb 12 20:25:30.811 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 12 20:25:30.811898 coreos-metadata[2162]: Feb 12 20:25:30.811 INFO Fetch successful Feb 12 20:25:30.811977 coreos-metadata[2162]: Feb 12 20:25:30.811 INFO Fetching 
http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 12 20:25:30.812686 coreos-metadata[2162]: Feb 12 20:25:30.812 INFO Fetch successful Feb 12 20:25:30.828439 systemd[1]: Finished coreos-metadata.service. Feb 12 20:25:31.252098 systemd[1]: Stopped kubelet.service. Feb 12 20:25:31.282321 systemd[1]: Reloading. Feb 12 20:25:31.401749 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2024-02-12T20:25:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:25:31.401812 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2024-02-12T20:25:31Z" level=info msg="torcx already run" Feb 12 20:25:31.575127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:31.575416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:31.614090 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:31.828902 systemd[1]: Started kubelet.service. Feb 12 20:25:31.916078 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:31.916078 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:25:31.916685 kubelet[2284]: I0212 20:25:31.916229 2284 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:25:31.918761 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:25:31.918761 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:25:33.713630 kubelet[2284]: I0212 20:25:33.713570 2284 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:25:33.713630 kubelet[2284]: I0212 20:25:33.713619 2284 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:25:33.714300 kubelet[2284]: I0212 20:25:33.714000 2284 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:25:33.717806 kubelet[2284]: I0212 20:25:33.717768 2284 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:25:33.721386 kubelet[2284]: W0212 20:25:33.721352 2284 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 20:25:33.722699 kubelet[2284]: I0212 20:25:33.722653 2284 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:25:33.723478 kubelet[2284]: I0212 20:25:33.723450 2284 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:25:33.723631 kubelet[2284]: I0212 20:25:33.723601 2284 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:25:33.723815 kubelet[2284]: I0212 20:25:33.723648 2284 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:25:33.723815 kubelet[2284]: I0212 20:25:33.723674 2284 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:25:33.723961 kubelet[2284]: I0212 20:25:33.723879 2284 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:25:33.730069 kubelet[2284]: I0212 20:25:33.730030 2284 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:25:33.730069 kubelet[2284]: I0212 20:25:33.730073 2284 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:25:33.730265 kubelet[2284]: I0212 20:25:33.730159 2284 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:25:33.730265 kubelet[2284]: I0212 20:25:33.730182 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:25:33.733638 kubelet[2284]: E0212 20:25:33.733594 2284 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.733972 kubelet[2284]: E0212 20:25:33.733938 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.736440 kubelet[2284]: I0212 20:25:33.736398 2284 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:25:33.737624 kubelet[2284]: W0212 20:25:33.737588 2284 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:25:33.738682 kubelet[2284]: I0212 20:25:33.738636 2284 server.go:1186] "Started kubelet" Feb 12 20:25:33.746320 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
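The coreos-metadata entries above follow the EC2 instance metadata (IMDS) flow: a session token is obtained with a PUT to /latest/api/token, then each attribute is fetched with a GET against the 2019-10-01 metadata paths. A minimal sketch of that sequence, assuming it runs on an EC2 instance where 169.254.169.254 is reachable; it mirrors the URLs in the log but is not coreos-metadata's own code.

# Sketch of the IMDSv2 fetch sequence seen in the coreos-metadata entries above.
# Assumes an EC2 instance where 169.254.169.254 is reachable; the header names are
# the standard IMDSv2 ones, and the paths mirror the 2019-10-01 URLs in the log.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds=300):
    # "Putting http://169.254.169.254/latest/api/token" -- a PUT returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    # "Fetching http://169.254.169.254/2019-10-01/meta-data/<path>" with the token attached.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for attr in ("instance-id", "instance-type", "local-ipv4", "public-ipv4",
                 "placement/availability-zone", "hostname", "public-hostname"):
        print(attr, "=", imds_get(attr, token))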
Feb 12 20:25:33.746475 kubelet[2284]: E0212 20:25:33.744706 2284 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:25:33.746475 kubelet[2284]: E0212 20:25:33.744778 2284 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:25:33.746917 kubelet[2284]: I0212 20:25:33.746887 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:25:33.750520 kubelet[2284]: I0212 20:25:33.750474 2284 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:25:33.751828 kubelet[2284]: I0212 20:25:33.751777 2284 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:25:33.760315 kubelet[2284]: I0212 20:25:33.760277 2284 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:25:33.761140 kubelet[2284]: I0212 20:25:33.761092 2284 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:25:33.796406 kubelet[2284]: E0212 20:25:33.796368 2284 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:33.796865 kubelet[2284]: W0212 20:25:33.796840 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:33.797318 kubelet[2284]: E0212 20:25:33.797013 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:33.798620 kubelet[2284]: E0212 20:25:33.798472 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c0b91050", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 738586192, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 738586192, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
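The NodeConfig dump above carries the kubelet's hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A toy illustration of how such signals are evaluated, with invented memory and filesystem numbers; this is not the kubelet's eviction manager.

# Toy check of the hard-eviction thresholds from the NodeConfig dump above.
# The "observed" stats below are invented for illustration.
MI = 1024 * 1024

thresholds = {
    "memory.available":  ("quantity", 100 * MI),    # < 100Mi
    "nodefs.available":  ("percentage", 0.10),      # < 10%
    "nodefs.inodesFree": ("percentage", 0.05),      # < 5%
    "imagefs.available": ("percentage", 0.15),      # < 15%
}

observed = {  # (free, capacity) pairs; made-up numbers
    "memory.available":  (3_500 * MI, None),
    "nodefs.available":  (12 * 1024 * MI, 80 * 1024 * MI),
    "nodefs.inodesFree": (400_000, 5_000_000),
    "imagefs.available": (9 * 1024 * MI, 80 * 1024 * MI),
}

for signal, (kind, limit) in thresholds.items():
    free, capacity = observed[signal]
    value = free if kind == "quantity" else free / capacity
    breached = value < limit
    print(f"{signal}: {'EVICT' if breached else 'ok'} (value={value:.3g}, threshold={limit:.3g})")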
Feb 12 20:25:33.798953 kubelet[2284]: W0212 20:25:33.798912 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:33.799037 kubelet[2284]: E0212 20:25:33.798956 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:33.799037 kubelet[2284]: W0212 20:25:33.799014 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:33.799037 kubelet[2284]: E0212 20:25:33.799037 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:33.803022 kubelet[2284]: E0212 20:25:33.802898 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c1174034", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 744758836, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 744758836, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:33.862506 kubelet[2284]: I0212 20:25:33.862471 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:33.866034 kubelet[2284]: E0212 20:25:33.865997 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:33.866392 kubelet[2284]: E0212 20:25:33.866282 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:33.868359 kubelet[2284]: E0212 20:25:33.868229 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
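The repeated 403s above are the kubelet talking to the API server as system:anonymous: its TLS bootstrap has not finished yet, so RBAC rejects every list, create, and patch. Each rejection is an ordinary HTTP 403 carrying a Status object. A minimal sketch of what one such response looks like on the wire; the server address is hypothetical and certificate verification is disabled purely to keep the illustration short (the kubelet itself uses its bootstrap kubeconfig).

# Minimal reproduction of the 403 shape above: an unauthenticated request is
# attributed to system:anonymous and rejected by RBAC. APISERVER is hypothetical
# and TLS verification is disabled only to keep the sketch short.
import json, ssl, urllib.error, urllib.request

APISERVER = "https://10.0.0.1:6443"   # hypothetical address, not from this log

ctx = ssl.create_default_context()
ctx.check_hostname = False            # illustration only
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen(f"{APISERVER}/api/v1/nodes", context=ctx, timeout=5) as resp:
        print(resp.status)
except urllib.error.HTTPError as err:
    # The body is a Status object whose message matches the log, e.g.
    # 'nodes is forbidden: User "system:anonymous" cannot list resource "nodes" ...'
    status = json.loads(err.read().decode())
    print(err.code, status.get("message"))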
Feb 12 20:25:33.870419 kubelet[2284]: E0212 20:25:33.870292 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:33.873229 kubelet[2284]: I0212 20:25:33.873196 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:25:33.877064 kubelet[2284]: E0212 20:25:33.876152 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 872345753, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:33.877347 kubelet[2284]: I0212 20:25:33.877313 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:25:33.880784 kubelet[2284]: E0212 20:25:33.878418 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 872365577, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:33.880784 kubelet[2284]: I0212 20:25:33.879789 2284 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:25:33.881057 kubelet[2284]: E0212 20:25:33.880353 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 872371097, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
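The rejected events above reuse the same object names (for example 172.31.25.148.17b33757c81a8789) while Count climbs 1, 2, 3, ...: repeated occurrences of the same (involved object, reason, message) are folded into one Event that is created once and then patched, which is why the first rejections say "cannot create" and the later ones "cannot patch". A small sketch of that aggregation idea, not the client-go event recorder itself.

# Sketch of event aggregation: repeated occurrences of the same
# (involved object, reason, message) share one Event whose count is bumped,
# mirroring the create-then-patch pattern and rising Count in the log above.
import time

events = {}   # (involved, reason, message) -> event dict

def record(involved, reason, message):
    key = (involved, reason, message)
    now = time.time()
    if key in events:
        ev = events[key]
        ev["count"] += 1
        ev["lastTimestamp"] = now
        return "patch", ev            # later occurrences PATCH the existing Event
    ev = events[key] = {"count": 1, "reason": reason, "message": message,
                        "firstTimestamp": now, "lastTimestamp": now}
    return "create", ev               # the first occurrence is a create

for _ in range(3):
    action, ev = record("Node/172.31.25.148", "NodeHasSufficientMemory",
                        "Node 172.31.25.148 status is now: NodeHasSufficientMemory")
    print(action, ev["count"])        # create 1, then patch 2, patch 3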
Feb 12 20:25:33.883703 kubelet[2284]: I0212 20:25:33.883290 2284 policy_none.go:49] "None policy: Start" Feb 12 20:25:33.885833 kubelet[2284]: I0212 20:25:33.885527 2284 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:25:33.885833 kubelet[2284]: I0212 20:25:33.885598 2284 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:25:33.910778 kubelet[2284]: I0212 20:25:33.910742 2284 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:25:33.911263 kubelet[2284]: I0212 20:25:33.911235 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:25:33.915513 kubelet[2284]: E0212 20:25:33.915464 2284 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.148\" not found" Feb 12 20:25:33.916519 kubelet[2284]: E0212 20:25:33.916404 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757cb312595", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 914228117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 914228117, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:34.000752 kubelet[2284]: E0212 20:25:34.000552 2284 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:34.006228 kubelet[2284]: I0212 20:25:34.006184 2284 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:25:34.047305 kubelet[2284]: I0212 20:25:34.047244 2284 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:25:34.047505 kubelet[2284]: I0212 20:25:34.047485 2284 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:25:34.047894 kubelet[2284]: I0212 20:25:34.047871 2284 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:25:34.048144 kubelet[2284]: E0212 20:25:34.048124 2284 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:25:34.050359 kubelet[2284]: W0212 20:25:34.050321 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:34.050884 kubelet[2284]: E0212 20:25:34.050859 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:34.067699 kubelet[2284]: I0212 20:25:34.067657 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:34.070097 kubelet[2284]: E0212 20:25:34.070046 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:34.070544 kubelet[2284]: E0212 20:25:34.070419 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 67586918, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:34.072646 kubelet[2284]: E0212 20:25:34.072531 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 67606310, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:34.144583 kubelet[2284]: E0212 20:25:34.144441 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 67611806, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:34.403170 kubelet[2284]: E0212 20:25:34.403004 2284 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:34.471962 kubelet[2284]: I0212 20:25:34.471911 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:34.476298 kubelet[2284]: E0212 20:25:34.476258 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:34.476561 kubelet[2284]: E0212 20:25:34.476205 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 471828532, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:34.543638 kubelet[2284]: E0212 20:25:34.543476 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 471850552, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:34.646943 kubelet[2284]: W0212 20:25:34.646904 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:34.647163 kubelet[2284]: E0212 20:25:34.647140 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:34.734921 kubelet[2284]: E0212 20:25:34.734842 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:34.745314 kubelet[2284]: E0212 20:25:34.745138 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 471874060, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:34.801861 kubelet[2284]: W0212 20:25:34.801799 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:34.802108 kubelet[2284]: E0212 20:25:34.802081 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:35.134838 kubelet[2284]: W0212 20:25:35.134634 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:35.134838 kubelet[2284]: E0212 20:25:35.134704 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:35.213257 kubelet[2284]: E0212 20:25:35.213212 2284 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:35.278173 kubelet[2284]: I0212 20:25:35.278102 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:35.279712 kubelet[2284]: E0212 20:25:35.279661 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:35.280246 kubelet[2284]: E0212 20:25:35.280120 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 35, 278049436, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:35.282159 kubelet[2284]: E0212 20:25:35.282011 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 35, 278057644, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:35.343247 kubelet[2284]: E0212 20:25:35.343072 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 35, 278065084, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:25:35.376356 kubelet[2284]: W0212 20:25:35.376292 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:35.376356 kubelet[2284]: E0212 20:25:35.376351 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:35.735381 kubelet[2284]: E0212 20:25:35.735275 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:36.736245 kubelet[2284]: E0212 20:25:36.736200 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:36.815258 kubelet[2284]: E0212 20:25:36.815192 2284 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:36.881652 kubelet[2284]: I0212 20:25:36.881591 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:36.883005 kubelet[2284]: E0212 20:25:36.882954 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:36.883581 kubelet[2284]: E0212 20:25:36.883462 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 36, 881538368, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
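The reflector errors above ("Failed to watch *v1.Service: failed to list *v1.Service ...") come from the informer pattern: each resource type is first LISTed to get a snapshot plus a resourceVersion, then WATCHed from that version; when the initial list is forbidden, the watch never starts, so both messages appear together. A rough list-then-watch sketch against the API, with a hypothetical server address and bearer token and TLS verification skipped for brevity.

# Rough list-then-watch sketch, the pattern behind the reflector errors above.
# APISERVER and TOKEN are placeholders; TLS verification is skipped for brevity.
import json, ssl, urllib.request

APISERVER = "https://10.0.0.1:6443"   # hypothetical
TOKEN = "<bearer-token>"              # hypothetical

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

def api_get(path):
    req = urllib.request.Request(APISERVER + path,
                                 headers={"Authorization": f"Bearer {TOKEN}"})
    return urllib.request.urlopen(req, context=ctx)

# 1) LIST: a consistent snapshot plus the resourceVersion to watch from.
with api_get("/api/v1/services") as resp:
    snapshot = json.load(resp)
rv = snapshot["metadata"]["resourceVersion"]

# 2) WATCH: a streaming response, one JSON event per line, starting at that version.
with api_get(f"/api/v1/services?watch=1&resourceVersion={rv}") as stream:
    for line in stream:
        event = json.loads(line)
        print(event["type"], event["object"]["metadata"]["name"])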
Feb 12 20:25:36.884906 kubelet[2284]: E0212 20:25:36.884800 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 36, 881546456, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:36.886506 kubelet[2284]: E0212 20:25:36.886373 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 36, 881554364, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:37.318045 amazon-ssm-agent[1776]: 2024-02-12 20:25:37 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 12 20:25:37.501834 kubelet[2284]: W0212 20:25:37.501771 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:37.501988 kubelet[2284]: E0212 20:25:37.501847 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:37.581767 kubelet[2284]: W0212 20:25:37.581613 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:37.582102 kubelet[2284]: E0212 20:25:37.581934 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:37.674904 kubelet[2284]: W0212 20:25:37.674866 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:37.675136 kubelet[2284]: E0212 20:25:37.675114 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:37.737408 kubelet[2284]: E0212 20:25:37.737342 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:37.786271 kubelet[2284]: W0212 20:25:37.786217 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:37.786440 kubelet[2284]: E0212 20:25:37.786292 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:38.737782 kubelet[2284]: E0212 20:25:38.737660 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:39.738886 kubelet[2284]: E0212 20:25:39.738777 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:40.017696 kubelet[2284]: E0212 20:25:40.017317 2284 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.25.148" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:25:40.084420 kubelet[2284]: I0212 20:25:40.084001 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:40.085542 kubelet[2284]: E0212 
20:25:40.085409 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81a8789", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.25.148 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862414217, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 40, 83932196, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81a8789" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:40.086027 kubelet[2284]: E0212 20:25:40.085998 2284 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.25.148" Feb 12 20:25:40.086879 kubelet[2284]: E0212 20:25:40.086779 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81aa4d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.25.148 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862421717, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 40, 83961872, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81aa4d5" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
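The retry intervals above follow a plain doubling backoff: the lease controller announces 200ms, 400ms, 800ms, 1.6s, 3.2s and then 6.4s, and the successive "Attempting to register node" entries are spaced the same way. A sketch of that schedule; whether a cap applies beyond 6.4s is not visible in this log, so none is modelled.

# Doubling backoff matching the retry intervals in the log above
# (200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s).
def doubling_backoff(base=0.2, attempts=6):
    delay = base
    for _ in range(attempts):
        yield delay
        delay *= 2

print([f"{d:g}s" for d in doubling_backoff()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']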
Feb 12 20:25:40.088391 kubelet[2284]: E0212 20:25:40.088287 2284 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.148.17b33757c81ac43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.25.148", UID:"172.31.25.148", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.25.148 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 25, 33, 862429757, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 25, 40, 83967404, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.25.148.17b33757c81ac43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:25:40.739144 kubelet[2284]: E0212 20:25:40.739078 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:41.306104 kubelet[2284]: W0212 20:25:41.306045 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:41.306266 kubelet[2284]: E0212 20:25:41.306121 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:25:41.739620 kubelet[2284]: E0212 20:25:41.739542 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:42.002746 kubelet[2284]: W0212 20:25:42.002581 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:42.002746 kubelet[2284]: E0212 20:25:42.002633 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:25:42.194252 kubelet[2284]: W0212 20:25:42.194193 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:42.194252 kubelet[2284]: E0212 20:25:42.194250 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:25:42.273696 kubelet[2284]: W0212 20:25:42.273555 2284 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:42.273696 kubelet[2284]: E0212 20:25:42.273629 2284 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:25:42.740494 kubelet[2284]: E0212 20:25:42.740446 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:43.717037 kubelet[2284]: I0212 20:25:43.716963 2284 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:25:43.741689 kubelet[2284]: E0212 20:25:43.741626 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:43.915837 kubelet[2284]: E0212 20:25:43.915794 2284 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.148\" not found" Feb 12 20:25:44.155900 kubelet[2284]: E0212 20:25:44.155776 2284 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.25.148" not found Feb 12 20:25:44.742086 kubelet[2284]: E0212 20:25:44.742041 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:45.204680 kubelet[2284]: E0212 20:25:45.204587 2284 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.25.148" not found Feb 12 20:25:45.742936 kubelet[2284]: E0212 20:25:45.742859 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:46.423996 kubelet[2284]: E0212 20:25:46.423958 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.25.148\" not found" node="172.31.25.148" Feb 12 20:25:46.487508 kubelet[2284]: I0212 20:25:46.487476 2284 kubelet_node_status.go:70] "Attempting to register node" node="172.31.25.148" Feb 12 20:25:46.606370 kubelet[2284]: I0212 20:25:46.606314 2284 kubelet_node_status.go:73] "Successfully registered node" node="172.31.25.148" Feb 12 20:25:46.620317 kubelet[2284]: E0212 20:25:46.620261 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:46.720972 kubelet[2284]: E0212 20:25:46.720891 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:46.743587 kubelet[2284]: E0212 20:25:46.743539 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:46.821362 kubelet[2284]: E0212 20:25:46.821314 2284 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"172.31.25.148\" not found" Feb 12 20:25:46.852421 sudo[2064]: pam_unix(sudo:session): session closed for user root Feb 12 20:25:46.878972 sshd[2060]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:46.885009 systemd[1]: sshd@4-172.31.25.148:22-147.75.109.163:39424.service: Deactivated successfully. Feb 12 20:25:46.886694 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:25:46.888006 systemd-logind[1792]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:25:46.892294 systemd-logind[1792]: Removed session 5. Feb 12 20:25:46.922227 kubelet[2284]: E0212 20:25:46.922163 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.022918 kubelet[2284]: E0212 20:25:47.022777 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.123198 kubelet[2284]: E0212 20:25:47.123142 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.223935 kubelet[2284]: E0212 20:25:47.223884 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.324582 kubelet[2284]: E0212 20:25:47.324455 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.425214 kubelet[2284]: E0212 20:25:47.425172 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.525896 kubelet[2284]: E0212 20:25:47.525848 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.626626 kubelet[2284]: E0212 20:25:47.626514 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.727525 kubelet[2284]: E0212 20:25:47.727463 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.744916 kubelet[2284]: E0212 20:25:47.744887 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:47.828291 kubelet[2284]: E0212 20:25:47.828261 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:47.929235 kubelet[2284]: E0212 20:25:47.929112 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:48.029809 kubelet[2284]: E0212 20:25:48.029776 2284 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.25.148\" not found" Feb 12 20:25:48.131271 kubelet[2284]: I0212 20:25:48.131221 2284 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:25:48.131906 env[1808]: time="2024-02-12T20:25:48.131768500Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 20:25:48.132919 kubelet[2284]: I0212 20:25:48.132882 2284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:25:48.742199 kubelet[2284]: I0212 20:25:48.742162 2284 apiserver.go:52] "Watching apiserver" Feb 12 20:25:48.745580 kubelet[2284]: E0212 20:25:48.745521 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:48.746349 kubelet[2284]: I0212 20:25:48.745900 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:48.746683 kubelet[2284]: I0212 20:25:48.746656 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:48.763765 kubelet[2284]: I0212 20:25:48.763691 2284 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:25:48.855098 kubelet[2284]: I0212 20:25:48.855059 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-bpf-maps\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855270 kubelet[2284]: I0212 20:25:48.855135 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hostproc\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855270 kubelet[2284]: I0212 20:25:48.855190 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-clustermesh-secrets\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855270 kubelet[2284]: I0212 20:25:48.855238 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwt7\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-kube-api-access-pkwt7\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855471 kubelet[2284]: I0212 20:25:48.855290 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f6f8aa1-893c-4070-985c-0003e00fbfea-xtables-lock\") pod \"kube-proxy-npdvv\" (UID: \"9f6f8aa1-893c-4070-985c-0003e00fbfea\") " pod="kube-system/kube-proxy-npdvv" Feb 12 20:25:48.855471 kubelet[2284]: I0212 20:25:48.855334 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-run\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855471 kubelet[2284]: I0212 20:25:48.855380 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-cgroup\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855471 kubelet[2284]: I0212 20:25:48.855423 2284 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-config-path\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855681 kubelet[2284]: I0212 20:25:48.855474 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f6f8aa1-893c-4070-985c-0003e00fbfea-kube-proxy\") pod \"kube-proxy-npdvv\" (UID: \"9f6f8aa1-893c-4070-985c-0003e00fbfea\") " pod="kube-system/kube-proxy-npdvv" Feb 12 20:25:48.855681 kubelet[2284]: I0212 20:25:48.855520 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f6f8aa1-893c-4070-985c-0003e00fbfea-lib-modules\") pod \"kube-proxy-npdvv\" (UID: \"9f6f8aa1-893c-4070-985c-0003e00fbfea\") " pod="kube-system/kube-proxy-npdvv" Feb 12 20:25:48.855681 kubelet[2284]: I0212 20:25:48.855561 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cni-path\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855681 kubelet[2284]: I0212 20:25:48.855627 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-lib-modules\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.855681 kubelet[2284]: I0212 20:25:48.855671 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-net\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855712 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hubble-tls\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855782 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk754\" (UniqueName: \"kubernetes.io/projected/9f6f8aa1-893c-4070-985c-0003e00fbfea-kube-api-access-pk754\") pod \"kube-proxy-npdvv\" (UID: \"9f6f8aa1-893c-4070-985c-0003e00fbfea\") " pod="kube-system/kube-proxy-npdvv" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855830 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-etc-cni-netd\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855873 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-xtables-lock\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855919 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-kernel\") pod \"cilium-4vtn5\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " pod="kube-system/cilium-4vtn5" Feb 12 20:25:48.856092 kubelet[2284]: I0212 20:25:48.855937 2284 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:25:49.071276 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 12 20:25:49.368876 env[1808]: time="2024-02-12T20:25:49.368204346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vtn5,Uid:7a6d56cf-7b65-4c63-b0cb-b02883be6e12,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:49.656885 env[1808]: time="2024-02-12T20:25:49.656464627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npdvv,Uid:9f6f8aa1-893c-4070-985c-0003e00fbfea,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:49.747049 kubelet[2284]: E0212 20:25:49.746983 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:50.522828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991917754.mount: Deactivated successfully. Feb 12 20:25:50.537992 env[1808]: time="2024-02-12T20:25:50.537907142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.541941 env[1808]: time="2024-02-12T20:25:50.541893588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.546074 env[1808]: time="2024-02-12T20:25:50.546003731Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.551027 env[1808]: time="2024-02-12T20:25:50.550961488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.552484 env[1808]: time="2024-02-12T20:25:50.552429264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.557415 env[1808]: time="2024-02-12T20:25:50.557367328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.560890 env[1808]: time="2024-02-12T20:25:50.560846052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.566677 env[1808]: time="2024-02-12T20:25:50.566612588Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:50.609488 env[1808]: time="2024-02-12T20:25:50.609382847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:50.609789 env[1808]: time="2024-02-12T20:25:50.609707976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:50.609970 env[1808]: time="2024-02-12T20:25:50.609924614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:50.616653 env[1808]: time="2024-02-12T20:25:50.616566267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014 pid=2380 runtime=io.containerd.runc.v2 Feb 12 20:25:50.630015 env[1808]: time="2024-02-12T20:25:50.629844487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:50.630422 env[1808]: time="2024-02-12T20:25:50.630306897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:50.630774 env[1808]: time="2024-02-12T20:25:50.630648731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:50.631512 env[1808]: time="2024-02-12T20:25:50.631383411Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ffc82155b7fda19cadfe0d7902a01870ea13b519525ce28d7e982dbfa20d676 pid=2395 runtime=io.containerd.runc.v2 Feb 12 20:25:50.720039 env[1808]: time="2024-02-12T20:25:50.719961217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vtn5,Uid:7a6d56cf-7b65-4c63-b0cb-b02883be6e12,Namespace:kube-system,Attempt:0,} returns sandbox id \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\"" Feb 12 20:25:50.724926 env[1808]: time="2024-02-12T20:25:50.724861193Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:25:50.748889 kubelet[2284]: E0212 20:25:50.748826 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:50.751374 env[1808]: time="2024-02-12T20:25:50.751315751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npdvv,Uid:9f6f8aa1-893c-4070-985c-0003e00fbfea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ffc82155b7fda19cadfe0d7902a01870ea13b519525ce28d7e982dbfa20d676\"" Feb 12 20:25:51.749171 kubelet[2284]: E0212 20:25:51.749103 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:52.750293 kubelet[2284]: E0212 20:25:52.750166 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:53.731230 kubelet[2284]: E0212 20:25:53.731184 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:53.751374 
kubelet[2284]: E0212 20:25:53.751331 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:54.752552 kubelet[2284]: E0212 20:25:54.752477 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:55.753170 kubelet[2284]: E0212 20:25:55.753088 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:56.754026 kubelet[2284]: E0212 20:25:56.753971 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:57.754269 kubelet[2284]: E0212 20:25:57.754117 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:58.226593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688168269.mount: Deactivated successfully. Feb 12 20:25:58.754782 kubelet[2284]: E0212 20:25:58.754666 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:59.755885 kubelet[2284]: E0212 20:25:59.755815 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:00.756591 kubelet[2284]: E0212 20:26:00.756537 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:01.757828 kubelet[2284]: E0212 20:26:01.757765 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:02.758919 kubelet[2284]: E0212 20:26:02.758876 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:02.779680 env[1808]: time="2024-02-12T20:26:02.779613445Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:02.856750 env[1808]: time="2024-02-12T20:26:02.856677143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:02.901593 env[1808]: time="2024-02-12T20:26:02.900845346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:02.902360 env[1808]: time="2024-02-12T20:26:02.902290930Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 20:26:02.904210 env[1808]: time="2024-02-12T20:26:02.904158483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:26:02.910573 env[1808]: time="2024-02-12T20:26:02.910457840Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:26:03.146114 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount276225603.mount: Deactivated successfully. Feb 12 20:26:03.155871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210773772.mount: Deactivated successfully. Feb 12 20:26:03.505325 env[1808]: time="2024-02-12T20:26:03.505026912Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\"" Feb 12 20:26:03.506901 env[1808]: time="2024-02-12T20:26:03.506840837Z" level=info msg="StartContainer for \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\"" Feb 12 20:26:03.650638 env[1808]: time="2024-02-12T20:26:03.650579985Z" level=info msg="StartContainer for \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\" returns successfully" Feb 12 20:26:03.760690 kubelet[2284]: E0212 20:26:03.760511 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:03.903878 update_engine[1793]: I0212 20:26:03.903810 1793 update_attempter.cc:509] Updating boot flags... Feb 12 20:26:03.948623 env[1808]: time="2024-02-12T20:26:03.948517208Z" level=error msg="collecting metrics for 2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675" error="cgroups: cgroup deleted: unknown" Feb 12 20:26:04.134182 env[1808]: time="2024-02-12T20:26:04.133617737Z" level=info msg="shim disconnected" id=2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675 Feb 12 20:26:04.134182 env[1808]: time="2024-02-12T20:26:04.133686161Z" level=warning msg="cleaning up after shim disconnected" id=2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675 namespace=k8s.io Feb 12 20:26:04.134182 env[1808]: time="2024-02-12T20:26:04.133709081Z" level=info msg="cleaning up dead shim" Feb 12 20:26:04.139923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675-rootfs.mount: Deactivated successfully. Feb 12 20:26:04.180413 env[1808]: time="2024-02-12T20:26:04.180360413Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2598 runtime=io.containerd.runc.v2\n" Feb 12 20:26:04.761467 kubelet[2284]: E0212 20:26:04.761418 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:05.034342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288010790.mount: Deactivated successfully. Feb 12 20:26:05.122564 env[1808]: time="2024-02-12T20:26:05.122454637Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:26:05.149492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815996459.mount: Deactivated successfully. 
Feb 12 20:26:05.162800 env[1808]: time="2024-02-12T20:26:05.162689716Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\"" Feb 12 20:26:05.163688 env[1808]: time="2024-02-12T20:26:05.163642146Z" level=info msg="StartContainer for \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\"" Feb 12 20:26:05.296886 env[1808]: time="2024-02-12T20:26:05.295193149Z" level=info msg="StartContainer for \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\" returns successfully" Feb 12 20:26:05.313366 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:26:05.315103 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:26:05.315400 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:26:05.322751 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:26:05.352765 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:26:05.762281 kubelet[2284]: E0212 20:26:05.762226 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:05.925388 env[1808]: time="2024-02-12T20:26:05.925302136Z" level=info msg="shim disconnected" id=38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e Feb 12 20:26:05.925388 env[1808]: time="2024-02-12T20:26:05.925375216Z" level=warning msg="cleaning up after shim disconnected" id=38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e namespace=k8s.io Feb 12 20:26:05.925703 env[1808]: time="2024-02-12T20:26:05.925402672Z" level=info msg="cleaning up dead shim" Feb 12 20:26:05.932144 env[1808]: time="2024-02-12T20:26:05.932086518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:05.937096 env[1808]: time="2024-02-12T20:26:05.937019309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:05.939681 env[1808]: time="2024-02-12T20:26:05.939628739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:05.941132 env[1808]: time="2024-02-12T20:26:05.941067206Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2668 runtime=io.containerd.runc.v2\n" Feb 12 20:26:05.950269 env[1808]: time="2024-02-12T20:26:05.950193141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:05.951800 env[1808]: time="2024-02-12T20:26:05.951678673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 20:26:05.955801 env[1808]: time="2024-02-12T20:26:05.955747365Z" level=info msg="CreateContainer within sandbox \"6ffc82155b7fda19cadfe0d7902a01870ea13b519525ce28d7e982dbfa20d676\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" 
Feb 12 20:26:05.983900 env[1808]: time="2024-02-12T20:26:05.983832610Z" level=info msg="CreateContainer within sandbox \"6ffc82155b7fda19cadfe0d7902a01870ea13b519525ce28d7e982dbfa20d676\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b4479663875876539872ce7bcbec212435370d2d3c0e7ad691f48e1379298ab\"" Feb 12 20:26:05.984949 env[1808]: time="2024-02-12T20:26:05.984901044Z" level=info msg="StartContainer for \"5b4479663875876539872ce7bcbec212435370d2d3c0e7ad691f48e1379298ab\"" Feb 12 20:26:06.085777 env[1808]: time="2024-02-12T20:26:06.084897800Z" level=info msg="StartContainer for \"5b4479663875876539872ce7bcbec212435370d2d3c0e7ad691f48e1379298ab\" returns successfully" Feb 12 20:26:06.133972 env[1808]: time="2024-02-12T20:26:06.133917575Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:26:06.140284 kubelet[2284]: I0212 20:26:06.140246 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-npdvv" podStartSLOduration=-9.223372016714632e+09 pod.CreationTimestamp="2024-02-12 20:25:46 +0000 UTC" firstStartedPulling="2024-02-12 20:25:50.760133917 +0000 UTC m=+18.924449487" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:06.138938314 +0000 UTC m=+34.303253944" watchObservedRunningTime="2024-02-12 20:26:06.140143404 +0000 UTC m=+34.304458986" Feb 12 20:26:06.146531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e-rootfs.mount: Deactivated successfully. Feb 12 20:26:06.186162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692088358.mount: Deactivated successfully. 
Feb 12 20:26:06.205298 env[1808]: time="2024-02-12T20:26:06.204938975Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\"" Feb 12 20:26:06.206622 env[1808]: time="2024-02-12T20:26:06.206561090Z" level=info msg="StartContainer for \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\"" Feb 12 20:26:06.325941 env[1808]: time="2024-02-12T20:26:06.325872051Z" level=info msg="StartContainer for \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\" returns successfully" Feb 12 20:26:06.417450 env[1808]: time="2024-02-12T20:26:06.417301316Z" level=info msg="shim disconnected" id=997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843 Feb 12 20:26:06.417931 env[1808]: time="2024-02-12T20:26:06.417883653Z" level=warning msg="cleaning up after shim disconnected" id=997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843 namespace=k8s.io Feb 12 20:26:06.418090 env[1808]: time="2024-02-12T20:26:06.418058698Z" level=info msg="cleaning up dead shim" Feb 12 20:26:06.433297 env[1808]: time="2024-02-12T20:26:06.433242220Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2826 runtime=io.containerd.runc.v2\n" Feb 12 20:26:06.763328 kubelet[2284]: E0212 20:26:06.763294 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:07.138715 env[1808]: time="2024-02-12T20:26:07.138446216Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:26:07.144601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843-rootfs.mount: Deactivated successfully. 
Feb 12 20:26:07.172884 env[1808]: time="2024-02-12T20:26:07.172823714Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\"" Feb 12 20:26:07.174113 env[1808]: time="2024-02-12T20:26:07.174065920Z" level=info msg="StartContainer for \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\"" Feb 12 20:26:07.274010 env[1808]: time="2024-02-12T20:26:07.273925021Z" level=info msg="StartContainer for \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\" returns successfully" Feb 12 20:26:07.314373 env[1808]: time="2024-02-12T20:26:07.314313138Z" level=info msg="shim disconnected" id=b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677 Feb 12 20:26:07.314818 env[1808]: time="2024-02-12T20:26:07.314758063Z" level=warning msg="cleaning up after shim disconnected" id=b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677 namespace=k8s.io Feb 12 20:26:07.314978 env[1808]: time="2024-02-12T20:26:07.314950087Z" level=info msg="cleaning up dead shim" Feb 12 20:26:07.332699 env[1808]: time="2024-02-12T20:26:07.332643772Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" Feb 12 20:26:07.347020 amazon-ssm-agent[1776]: 2024-02-12 20:26:07 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 12 20:26:07.764326 kubelet[2284]: E0212 20:26:07.764263 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:08.144654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677-rootfs.mount: Deactivated successfully. Feb 12 20:26:08.149354 env[1808]: time="2024-02-12T20:26:08.149290315Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:26:08.179052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529955533.mount: Deactivated successfully. Feb 12 20:26:08.197655 env[1808]: time="2024-02-12T20:26:08.197574009Z" level=info msg="CreateContainer within sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\"" Feb 12 20:26:08.198596 env[1808]: time="2024-02-12T20:26:08.198548278Z" level=info msg="StartContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\"" Feb 12 20:26:08.300031 env[1808]: time="2024-02-12T20:26:08.299963087Z" level=info msg="StartContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" returns successfully" Feb 12 20:26:08.483768 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 20:26:08.499579 kubelet[2284]: I0212 20:26:08.499519 2284 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:26:08.765178 kubelet[2284]: E0212 20:26:08.765025 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:09.160762 kernel: Initializing XFRM netlink socket Feb 12 20:26:09.166766 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 20:26:09.766049 kubelet[2284]: E0212 20:26:09.765970 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:10.766885 kubelet[2284]: E0212 20:26:10.766811 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:10.976068 (udev-worker)[2510]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:10.984846 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:26:10.984913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:26:10.976684 (udev-worker)[2520]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:10.980119 systemd-networkd[1592]: cilium_host: Link UP Feb 12 20:26:10.980807 systemd-networkd[1592]: cilium_net: Link UP Feb 12 20:26:10.981963 systemd-networkd[1592]: cilium_net: Gained carrier Feb 12 20:26:10.984627 systemd-networkd[1592]: cilium_host: Gained carrier Feb 12 20:26:11.143372 systemd-networkd[1592]: cilium_vxlan: Link UP Feb 12 20:26:11.143393 systemd-networkd[1592]: cilium_vxlan: Gained carrier Feb 12 20:26:11.183950 systemd-networkd[1592]: cilium_net: Gained IPv6LL Feb 12 20:26:11.240432 kubelet[2284]: I0212 20:26:11.239051 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4vtn5" podStartSLOduration=-9.223372011615805e+09 pod.CreationTimestamp="2024-02-12 20:25:46 +0000 UTC" firstStartedPulling="2024-02-12 20:25:50.722530612 +0000 UTC m=+18.886846182" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:09.192067907 +0000 UTC m=+37.356383525" watchObservedRunningTime="2024-02-12 20:26:11.238972079 +0000 UTC m=+39.403287673" Feb 12 20:26:11.240432 kubelet[2284]: I0212 20:26:11.239433 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:11.421621 kubelet[2284]: I0212 20:26:11.420973 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbdq\" (UniqueName: \"kubernetes.io/projected/a95b4912-e63f-425e-946d-fb05ed33456b-kube-api-access-4gbdq\") pod \"nginx-deployment-8ffc5cf85-fqtn5\" (UID: \"a95b4912-e63f-425e-946d-fb05ed33456b\") " pod="default/nginx-deployment-8ffc5cf85-fqtn5" Feb 12 20:26:11.551778 env[1808]: time="2024-02-12T20:26:11.551677381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fqtn5,Uid:a95b4912-e63f-425e-946d-fb05ed33456b,Namespace:default,Attempt:0,}" Feb 12 20:26:11.643855 kernel: NET: Registered PF_ALG protocol family Feb 12 20:26:11.768034 kubelet[2284]: E0212 20:26:11.767984 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:11.903996 systemd-networkd[1592]: cilium_host: Gained IPv6LL Feb 12 20:26:12.768638 kubelet[2284]: E0212 20:26:12.768597 2284 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:12.800949 systemd-networkd[1592]: cilium_vxlan: Gained IPv6LL Feb 12 20:26:12.891221 systemd-networkd[1592]: lxc_health: Link UP Feb 12 20:26:12.911942 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:26:12.912291 systemd-networkd[1592]: lxc_health: Gained carrier Feb 12 20:26:13.647596 systemd-networkd[1592]: lxc3d6248f39293: Link UP Feb 12 20:26:13.662754 kernel: eth0: renamed from tmpa4c35 Feb 12 20:26:13.662898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3d6248f39293: link becomes ready Feb 12 20:26:13.666609 systemd-networkd[1592]: lxc3d6248f39293: Gained carrier Feb 12 20:26:13.730596 kubelet[2284]: E0212 20:26:13.730537 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:13.770250 kubelet[2284]: E0212 20:26:13.770195 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:14.081440 systemd-networkd[1592]: lxc_health: Gained IPv6LL Feb 12 20:26:14.771334 kubelet[2284]: E0212 20:26:14.771264 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:15.039963 systemd-networkd[1592]: lxc3d6248f39293: Gained IPv6LL Feb 12 20:26:15.329858 kubelet[2284]: I0212 20:26:15.329609 2284 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 20:26:15.772360 kubelet[2284]: E0212 20:26:15.772313 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:16.774187 kubelet[2284]: E0212 20:26:16.774145 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:17.775909 kubelet[2284]: E0212 20:26:17.775842 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:18.776913 kubelet[2284]: E0212 20:26:18.776847 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:19.777719 kubelet[2284]: E0212 20:26:19.777648 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:20.778900 kubelet[2284]: E0212 20:26:20.778827 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:21.688211 env[1808]: time="2024-02-12T20:26:21.688082395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:21.688855 env[1808]: time="2024-02-12T20:26:21.688178407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:21.688855 env[1808]: time="2024-02-12T20:26:21.688206271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:21.689294 env[1808]: time="2024-02-12T20:26:21.689204648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4c35fd89aaba74dd68e29f388ffbd8e1cfd5779c0015157f680c4415d7bb694 pid=3437 runtime=io.containerd.runc.v2 Feb 12 20:26:21.779704 kubelet[2284]: E0212 20:26:21.779614 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:21.797446 env[1808]: time="2024-02-12T20:26:21.797386507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fqtn5,Uid:a95b4912-e63f-425e-946d-fb05ed33456b,Namespace:default,Attempt:0,} returns sandbox id \"a4c35fd89aaba74dd68e29f388ffbd8e1cfd5779c0015157f680c4415d7bb694\"" Feb 12 20:26:21.800330 env[1808]: time="2024-02-12T20:26:21.800283477Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:26:22.780415 kubelet[2284]: E0212 20:26:22.780327 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:23.781446 kubelet[2284]: E0212 20:26:23.781375 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:24.781792 kubelet[2284]: E0212 20:26:24.781699 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:25.782391 kubelet[2284]: E0212 20:26:25.782326 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:25.922430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978804016.mount: Deactivated successfully. 
Feb 12 20:26:26.783279 kubelet[2284]: E0212 20:26:26.783212 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:27.462837 env[1808]: time="2024-02-12T20:26:27.462758292Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:27.466207 env[1808]: time="2024-02-12T20:26:27.466144717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:27.469615 env[1808]: time="2024-02-12T20:26:27.469563207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:27.472854 env[1808]: time="2024-02-12T20:26:27.472808453Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:27.474191 env[1808]: time="2024-02-12T20:26:27.474143094Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 20:26:27.478156 env[1808]: time="2024-02-12T20:26:27.478105448Z" level=info msg="CreateContainer within sandbox \"a4c35fd89aaba74dd68e29f388ffbd8e1cfd5779c0015157f680c4415d7bb694\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:26:27.502855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686163175.mount: Deactivated successfully. 
Feb 12 20:26:27.515483 env[1808]: time="2024-02-12T20:26:27.515421627Z" level=info msg="CreateContainer within sandbox \"a4c35fd89aaba74dd68e29f388ffbd8e1cfd5779c0015157f680c4415d7bb694\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"97a8ea1a53e724e06152caeaefb119a757ca8e5db580b2f1aa414c45f0c4e1e0\"" Feb 12 20:26:27.516881 env[1808]: time="2024-02-12T20:26:27.516811216Z" level=info msg="StartContainer for \"97a8ea1a53e724e06152caeaefb119a757ca8e5db580b2f1aa414c45f0c4e1e0\"" Feb 12 20:26:27.623836 env[1808]: time="2024-02-12T20:26:27.623767704Z" level=info msg="StartContainer for \"97a8ea1a53e724e06152caeaefb119a757ca8e5db580b2f1aa414c45f0c4e1e0\" returns successfully" Feb 12 20:26:27.784845 kubelet[2284]: E0212 20:26:27.784193 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:28.231037 kubelet[2284]: I0212 20:26:28.230993 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-fqtn5" podStartSLOduration=-9.223372019623835e+09 pod.CreationTimestamp="2024-02-12 20:26:11 +0000 UTC" firstStartedPulling="2024-02-12 20:26:21.799362333 +0000 UTC m=+49.963677903" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:28.230555192 +0000 UTC m=+56.394870798" watchObservedRunningTime="2024-02-12 20:26:28.230941928 +0000 UTC m=+56.395257534" Feb 12 20:26:28.784417 kubelet[2284]: E0212 20:26:28.784350 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:29.785057 kubelet[2284]: E0212 20:26:29.784973 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:30.785744 kubelet[2284]: E0212 20:26:30.785675 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:31.786082 kubelet[2284]: E0212 20:26:31.786018 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.786742 kubelet[2284]: E0212 20:26:32.786672 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:33.731220 kubelet[2284]: E0212 20:26:33.731158 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:33.787199 kubelet[2284]: E0212 20:26:33.787124 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:34.787873 kubelet[2284]: E0212 20:26:34.787820 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:35.783049 kubelet[2284]: I0212 20:26:35.782975 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:35.790042 kubelet[2284]: E0212 20:26:35.789981 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:35.875601 kubelet[2284]: I0212 20:26:35.875561 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fceaa58b-01ba-4aa3-a7b2-264a79416915-data\") pod \"nfs-server-provisioner-0\" (UID: \"fceaa58b-01ba-4aa3-a7b2-264a79416915\") " 
pod="default/nfs-server-provisioner-0" Feb 12 20:26:35.875911 kubelet[2284]: I0212 20:26:35.875887 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fw87\" (UniqueName: \"kubernetes.io/projected/fceaa58b-01ba-4aa3-a7b2-264a79416915-kube-api-access-2fw87\") pod \"nfs-server-provisioner-0\" (UID: \"fceaa58b-01ba-4aa3-a7b2-264a79416915\") " pod="default/nfs-server-provisioner-0" Feb 12 20:26:36.099149 env[1808]: time="2024-02-12T20:26:36.098083821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fceaa58b-01ba-4aa3-a7b2-264a79416915,Namespace:default,Attempt:0,}" Feb 12 20:26:36.156938 (udev-worker)[3554]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:36.158011 (udev-worker)[3555]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:36.166137 systemd-networkd[1592]: lxc49335c320504: Link UP Feb 12 20:26:36.181863 kernel: eth0: renamed from tmp00d9b Feb 12 20:26:36.192435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:36.192547 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc49335c320504: link becomes ready Feb 12 20:26:36.192819 systemd-networkd[1592]: lxc49335c320504: Gained carrier Feb 12 20:26:36.609531 env[1808]: time="2024-02-12T20:26:36.609390628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:36.609711 env[1808]: time="2024-02-12T20:26:36.609561749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:36.609711 env[1808]: time="2024-02-12T20:26:36.609654905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:36.610207 env[1808]: time="2024-02-12T20:26:36.610121273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00d9ba15f8141c0a4ff070e94b7203f07353cbbbb48a974e68c0c0da9d29ad9f pid=3612 runtime=io.containerd.runc.v2 Feb 12 20:26:36.725627 env[1808]: time="2024-02-12T20:26:36.725547315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fceaa58b-01ba-4aa3-a7b2-264a79416915,Namespace:default,Attempt:0,} returns sandbox id \"00d9ba15f8141c0a4ff070e94b7203f07353cbbbb48a974e68c0c0da9d29ad9f\"" Feb 12 20:26:36.728431 env[1808]: time="2024-02-12T20:26:36.728366740Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:26:36.791756 kubelet[2284]: E0212 20:26:36.791664 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:37.002873 systemd[1]: run-containerd-runc-k8s.io-00d9ba15f8141c0a4ff070e94b7203f07353cbbbb48a974e68c0c0da9d29ad9f-runc.K8vC7g.mount: Deactivated successfully. 
Feb 12 20:26:37.792052 kubelet[2284]: E0212 20:26:37.791977 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:37.952157 systemd-networkd[1592]: lxc49335c320504: Gained IPv6LL Feb 12 20:26:38.792481 kubelet[2284]: E0212 20:26:38.792418 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:39.793046 kubelet[2284]: E0212 20:26:39.792956 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:40.419488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766268683.mount: Deactivated successfully. Feb 12 20:26:40.793736 kubelet[2284]: E0212 20:26:40.793666 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:41.794479 kubelet[2284]: E0212 20:26:41.794413 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:42.794866 kubelet[2284]: E0212 20:26:42.794782 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:43.795468 kubelet[2284]: E0212 20:26:43.795413 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:44.280858 env[1808]: time="2024-02-12T20:26:44.280777903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:44.296292 env[1808]: time="2024-02-12T20:26:44.296220130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:44.307522 env[1808]: time="2024-02-12T20:26:44.307469304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:44.319906 env[1808]: time="2024-02-12T20:26:44.319836662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:44.320943 env[1808]: time="2024-02-12T20:26:44.320816486Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 12 20:26:44.326379 env[1808]: time="2024-02-12T20:26:44.326323275Z" level=info msg="CreateContainer within sandbox \"00d9ba15f8141c0a4ff070e94b7203f07353cbbbb48a974e68c0c0da9d29ad9f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:26:44.418983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750211924.mount: Deactivated successfully. 
Feb 12 20:26:44.431087 env[1808]: time="2024-02-12T20:26:44.430981342Z" level=info msg="CreateContainer within sandbox \"00d9ba15f8141c0a4ff070e94b7203f07353cbbbb48a974e68c0c0da9d29ad9f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bf9db4fc7f8c6db396dcab54c9a2f79d8f602df4b949c1ebb383045866c324ef\"" Feb 12 20:26:44.432259 env[1808]: time="2024-02-12T20:26:44.432184606Z" level=info msg="StartContainer for \"bf9db4fc7f8c6db396dcab54c9a2f79d8f602df4b949c1ebb383045866c324ef\"" Feb 12 20:26:44.539473 env[1808]: time="2024-02-12T20:26:44.534862648Z" level=info msg="StartContainer for \"bf9db4fc7f8c6db396dcab54c9a2f79d8f602df4b949c1ebb383045866c324ef\" returns successfully" Feb 12 20:26:44.796202 kubelet[2284]: E0212 20:26:44.796084 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:45.288431 kubelet[2284]: I0212 20:26:45.288380 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372026566446e+09 pod.CreationTimestamp="2024-02-12 20:26:35 +0000 UTC" firstStartedPulling="2024-02-12 20:26:36.72780172 +0000 UTC m=+64.892117290" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:45.28783344 +0000 UTC m=+73.452149034" watchObservedRunningTime="2024-02-12 20:26:45.28833012 +0000 UTC m=+73.452645702" Feb 12 20:26:45.797971 kubelet[2284]: E0212 20:26:45.797915 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:46.798674 kubelet[2284]: E0212 20:26:46.798636 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:47.800394 kubelet[2284]: E0212 20:26:47.800326 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:48.801261 kubelet[2284]: E0212 20:26:48.801198 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:49.801701 kubelet[2284]: E0212 20:26:49.801662 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:50.803394 kubelet[2284]: E0212 20:26:50.803329 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:51.804102 kubelet[2284]: E0212 20:26:51.804040 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:52.804768 kubelet[2284]: E0212 20:26:52.804660 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:53.730840 kubelet[2284]: E0212 20:26:53.730799 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:53.805174 kubelet[2284]: E0212 20:26:53.805139 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:54.226180 kubelet[2284]: I0212 20:26:54.226129 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:54.407913 kubelet[2284]: I0212 20:26:54.407870 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-phsql\" (UniqueName: \"kubernetes.io/projected/da45afbe-9067-4c23-92c9-2640e9b245c8-kube-api-access-phsql\") pod \"test-pod-1\" (UID: \"da45afbe-9067-4c23-92c9-2640e9b245c8\") " pod="default/test-pod-1" Feb 12 20:26:54.408210 kubelet[2284]: I0212 20:26:54.408187 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c1e5320c-4517-44dd-bd4d-fec89aa381ca\" (UniqueName: \"kubernetes.io/nfs/da45afbe-9067-4c23-92c9-2640e9b245c8-pvc-c1e5320c-4517-44dd-bd4d-fec89aa381ca\") pod \"test-pod-1\" (UID: \"da45afbe-9067-4c23-92c9-2640e9b245c8\") " pod="default/test-pod-1" Feb 12 20:26:54.551763 kernel: FS-Cache: Loaded Feb 12 20:26:54.595619 kernel: RPC: Registered named UNIX socket transport module. Feb 12 20:26:54.595839 kernel: RPC: Registered udp transport module. Feb 12 20:26:54.599960 kernel: RPC: Registered tcp transport module. Feb 12 20:26:54.600027 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 20:26:54.652773 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 20:26:54.806763 kubelet[2284]: E0212 20:26:54.806680 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:54.910527 kernel: NFS: Registering the id_resolver key type Feb 12 20:26:54.910687 kernel: Key type id_resolver registered Feb 12 20:26:54.912381 kernel: Key type id_legacy registered Feb 12 20:26:54.952800 nfsidmap[3752]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 20:26:54.958169 nfsidmap[3753]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 20:26:55.132018 env[1808]: time="2024-02-12T20:26:55.131492561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:da45afbe-9067-4c23-92c9-2640e9b245c8,Namespace:default,Attempt:0,}" Feb 12 20:26:55.184478 (udev-worker)[3738]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:55.185772 (udev-worker)[3750]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:26:55.189949 systemd-networkd[1592]: lxcfd476ac39225: Link UP Feb 12 20:26:55.201858 kernel: eth0: renamed from tmpc09f6 Feb 12 20:26:55.211793 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:55.211932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfd476ac39225: link becomes ready Feb 12 20:26:55.212263 systemd-networkd[1592]: lxcfd476ac39225: Gained carrier Feb 12 20:26:55.658303 env[1808]: time="2024-02-12T20:26:55.657962999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:55.658303 env[1808]: time="2024-02-12T20:26:55.658031844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:55.658303 env[1808]: time="2024-02-12T20:26:55.658065313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:55.658632 env[1808]: time="2024-02-12T20:26:55.658378494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c09f6ff1999d9727f16ea5fd796709233677837eab52d8ab294e4b8f723c0c3c pid=3779 runtime=io.containerd.runc.v2 Feb 12 20:26:55.696546 systemd[1]: run-containerd-runc-k8s.io-c09f6ff1999d9727f16ea5fd796709233677837eab52d8ab294e4b8f723c0c3c-runc.udinVx.mount: Deactivated successfully. Feb 12 20:26:55.769414 env[1808]: time="2024-02-12T20:26:55.769355473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:da45afbe-9067-4c23-92c9-2640e9b245c8,Namespace:default,Attempt:0,} returns sandbox id \"c09f6ff1999d9727f16ea5fd796709233677837eab52d8ab294e4b8f723c0c3c\"" Feb 12 20:26:55.772176 env[1808]: time="2024-02-12T20:26:55.772120282Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:26:55.807642 kubelet[2284]: E0212 20:26:55.807572 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:56.160392 env[1808]: time="2024-02-12T20:26:56.160340039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:56.165068 env[1808]: time="2024-02-12T20:26:56.165021253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:56.169539 env[1808]: time="2024-02-12T20:26:56.169493339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:56.172764 env[1808]: time="2024-02-12T20:26:56.172675837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:56.174270 env[1808]: time="2024-02-12T20:26:56.174217117Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 20:26:56.177951 env[1808]: time="2024-02-12T20:26:56.177896807Z" level=info msg="CreateContainer within sandbox \"c09f6ff1999d9727f16ea5fd796709233677837eab52d8ab294e4b8f723c0c3c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 20:26:56.203146 env[1808]: time="2024-02-12T20:26:56.203088791Z" level=info msg="CreateContainer within sandbox \"c09f6ff1999d9727f16ea5fd796709233677837eab52d8ab294e4b8f723c0c3c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"27ba329e0e504cc90a4e7425f82cb4e0d0f0f44de707f7a956ffc3799e78445e\"" Feb 12 20:26:56.204294 env[1808]: time="2024-02-12T20:26:56.204243617Z" level=info msg="StartContainer for \"27ba329e0e504cc90a4e7425f82cb4e0d0f0f44de707f7a956ffc3799e78445e\"" Feb 12 20:26:56.301762 env[1808]: time="2024-02-12T20:26:56.299873157Z" level=info msg="StartContainer for \"27ba329e0e504cc90a4e7425f82cb4e0d0f0f44de707f7a956ffc3799e78445e\" returns successfully" Feb 12 20:26:56.530682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190685468.mount: Deactivated successfully. 
Feb 12 20:26:56.808430 kubelet[2284]: E0212 20:26:56.808287 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:57.152103 systemd-networkd[1592]: lxcfd476ac39225: Gained IPv6LL Feb 12 20:26:57.318262 kubelet[2284]: I0212 20:26:57.318216 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372015536612e+09 pod.CreationTimestamp="2024-02-12 20:26:36 +0000 UTC" firstStartedPulling="2024-02-12 20:26:55.771386002 +0000 UTC m=+83.935701572" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:57.317844812 +0000 UTC m=+85.482160382" watchObservedRunningTime="2024-02-12 20:26:57.318164197 +0000 UTC m=+85.482479779" Feb 12 20:26:57.809058 kubelet[2284]: E0212 20:26:57.808993 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:58.809576 kubelet[2284]: E0212 20:26:58.809506 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:59.810671 kubelet[2284]: E0212 20:26:59.810606 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:00.811442 kubelet[2284]: E0212 20:27:00.811372 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:01.812181 kubelet[2284]: E0212 20:27:01.812118 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:02.812312 kubelet[2284]: E0212 20:27:02.812274 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:03.001011 env[1808]: time="2024-02-12T20:27:03.000922847Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:27:03.010011 env[1808]: time="2024-02-12T20:27:03.009958400Z" level=info msg="StopContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" with timeout 1 (s)" Feb 12 20:27:03.010646 env[1808]: time="2024-02-12T20:27:03.010608665Z" level=info msg="Stop container \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" with signal terminated" Feb 12 20:27:03.021856 systemd-networkd[1592]: lxc_health: Link DOWN Feb 12 20:27:03.021871 systemd-networkd[1592]: lxc_health: Lost carrier Feb 12 20:27:03.096812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329-rootfs.mount: Deactivated successfully. 
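Both "Observed pod startup duration" entries above (for default/nfs-server-provisioner-0 and default/test-pod-1) report a podStartSLOduration around -9.22e9 seconds while lastFinishedPulling is the zero time "0001-01-01 00:00:00 +0000 UTC". A plausible reading, offered here as an assumption rather than anything stated in the log, is that the image-pull interval bottoms out at Go's minimum time.Duration (about -9223372036.85 s) and the genuine creation-to-running interval is added on top. The check below only does the arithmetic, using the timestamps logged above (the watch-observed running time gives the closest match):

    from datetime import datetime

    # Go's time.Duration is an int64 nanosecond count; its lower bound in seconds.
    MIN_DURATION_S = -(2**63) / 1e9  # ~ -9223372036.854775

    FMT = "%Y-%m-%d %H:%M:%S.%f"
    cases = {
        # pod: (CreationTimestamp, watchObservedRunningTime, logged podStartSLOduration)
        # values copied from the kubelet entries above (fractions truncated to 6 digits)
        "default/nfs-server-provisioner-0": (
            "2024-02-12 20:26:35.000000", "2024-02-12 20:26:45.288330", -9.223372026566446e+09),
        "default/test-pod-1": (
            "2024-02-12 20:26:36.000000", "2024-02-12 20:26:57.318164", -9.223372015536612e+09),
    }

    for pod, (created, running, logged) in cases.items():
        startup = (datetime.strptime(running, FMT) - datetime.strptime(created, FMT)).total_seconds()
        reconstructed = MIN_DURATION_S + startup
        # Agreement to within float precision supports the saturated-duration reading.
        print(f"{pod}: startup={startup:.3f}s logged={logged:.6f} "
              f"reconstructed={reconstructed:.6f} delta={logged - reconstructed:+.6f}")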
Feb 12 20:27:03.763246 env[1808]: time="2024-02-12T20:27:03.763184322Z" level=info msg="shim disconnected" id=2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329 Feb 12 20:27:03.763609 env[1808]: time="2024-02-12T20:27:03.763573271Z" level=warning msg="cleaning up after shim disconnected" id=2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329 namespace=k8s.io Feb 12 20:27:03.763778 env[1808]: time="2024-02-12T20:27:03.763748858Z" level=info msg="cleaning up dead shim" Feb 12 20:27:03.778101 env[1808]: time="2024-02-12T20:27:03.778045351Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\n" Feb 12 20:27:03.781458 env[1808]: time="2024-02-12T20:27:03.781406750Z" level=info msg="StopContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" returns successfully" Feb 12 20:27:03.782719 env[1808]: time="2024-02-12T20:27:03.782647350Z" level=info msg="StopPodSandbox for \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\"" Feb 12 20:27:03.782916 env[1808]: time="2024-02-12T20:27:03.782771756Z" level=info msg="Container to stop \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:03.782916 env[1808]: time="2024-02-12T20:27:03.782805644Z" level=info msg="Container to stop \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:03.782916 env[1808]: time="2024-02-12T20:27:03.782837133Z" level=info msg="Container to stop \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:03.782916 env[1808]: time="2024-02-12T20:27:03.782864337Z" level=info msg="Container to stop \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:03.782916 env[1808]: time="2024-02-12T20:27:03.782892238Z" level=info msg="Container to stop \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:03.786223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014-shm.mount: Deactivated successfully. Feb 12 20:27:03.814015 kubelet[2284]: E0212 20:27:03.813938 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:03.831975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014-rootfs.mount: Deactivated successfully. 
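The StopPodSandbox entry above enumerates, via "Container to stop" messages, every container that ran in sandbox 18aa4582…, and later entries in this capture log a matching "RemoveContainer … returns successfully" for each of those IDs (the ContainerStatus NotFound errors that follow are the kubelet re-querying containers that are already gone). The snippet below is an illustrative cross-check of that pairing over a capture like this one; the patterns are lifted from the messages above, `node.log` is a placeholder, and a container whose removal happens outside the captured window will show up as a leftover.

    import re

    # "Container to stop \"<64-hex-id>\"" vs. "RemoveContainer for \"<id>\" returns successfully".
    # The optional backslash copes with the escaped quotes as they appear in this capture.
    STOP_RE = re.compile(r'Container to stop \\?"(?P<cid>[0-9a-f]{64})\\?"')
    REMOVE_RE = re.compile(r'RemoveContainer for \\?"(?P<cid>[0-9a-f]{64})\\?" returns successfully')

    def unremoved_containers(log_text):
        stopped = set(STOP_RE.findall(log_text))
        removed = set(REMOVE_RE.findall(log_text))
        return stopped - removed

    if __name__ == "__main__":
        with open("node.log") as fh:  # placeholder path
            leftovers = unremoved_containers(fh.read())
        print("containers stopped but never removed:", sorted(leftovers) or "none")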
Feb 12 20:27:03.845225 env[1808]: time="2024-02-12T20:27:03.845146140Z" level=info msg="shim disconnected" id=18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014 Feb 12 20:27:03.845225 env[1808]: time="2024-02-12T20:27:03.845221297Z" level=warning msg="cleaning up after shim disconnected" id=18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014 namespace=k8s.io Feb 12 20:27:03.845555 env[1808]: time="2024-02-12T20:27:03.845244073Z" level=info msg="cleaning up dead shim" Feb 12 20:27:03.858934 env[1808]: time="2024-02-12T20:27:03.858864909Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n" Feb 12 20:27:03.859491 env[1808]: time="2024-02-12T20:27:03.859441409Z" level=info msg="TearDown network for sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" successfully" Feb 12 20:27:03.859605 env[1808]: time="2024-02-12T20:27:03.859490562Z" level=info msg="StopPodSandbox for \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" returns successfully" Feb 12 20:27:03.932247 kubelet[2284]: E0212 20:27:03.932181 2284 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:03.965841 kubelet[2284]: I0212 20:27:03.965781 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.966027 kubelet[2284]: I0212 20:27:03.965682 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-kernel\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.966027 kubelet[2284]: I0212 20:27:03.965949 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-bpf-maps\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.966161 kubelet[2284]: I0212 20:27:03.966021 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.966161 kubelet[2284]: I0212 20:27:03.966112 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkwt7\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-kube-api-access-pkwt7\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.966786 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-run\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.966869 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-etc-cni-netd\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.966982 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hostproc\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.967055 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-clustermesh-secrets\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.967136 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cni-path\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.970774 kubelet[2284]: I0212 20:27:03.967159 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967202 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-lib-modules\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967214 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967250 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-net\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967316 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-xtables-lock\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967388 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-config-path\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971241 kubelet[2284]: I0212 20:27:03.967433 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-cgroup\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.967500 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hubble-tls\") pod \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\" (UID: \"7a6d56cf-7b65-4c63-b0cb-b02883be6e12\") " Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.967581 2284 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-bpf-maps\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.967609 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-run\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.967659 2284 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-etc-cni-netd\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.967687 2284 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-kernel\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.968191 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.971602 kubelet[2284]: I0212 20:27:03.968246 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.972056 kubelet[2284]: I0212 20:27:03.968318 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.972056 kubelet[2284]: I0212 20:27:03.968388 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.972056 kubelet[2284]: W0212 20:27:03.968778 2284 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a6d56cf-7b65-4c63-b0cb-b02883be6e12/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:27:03.974677 kubelet[2284]: I0212 20:27:03.974627 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.975032 kubelet[2284]: I0212 20:27:03.974978 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:03.975164 kubelet[2284]: I0212 20:27:03.975074 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:27:03.978561 systemd[1]: var-lib-kubelet-pods-7a6d56cf\x2d7b65\x2d4c63\x2db0cb\x2db02883be6e12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkwt7.mount: Deactivated successfully. Feb 12 20:27:03.980508 kubelet[2284]: I0212 20:27:03.980443 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-kube-api-access-pkwt7" (OuterVolumeSpecName: "kube-api-access-pkwt7") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "kube-api-access-pkwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:03.987195 systemd[1]: var-lib-kubelet-pods-7a6d56cf\x2d7b65\x2d4c63\x2db0cb\x2db02883be6e12-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:03.991275 systemd[1]: var-lib-kubelet-pods-7a6d56cf\x2d7b65\x2d4c63\x2db0cb\x2db02883be6e12-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:27:03.991622 kubelet[2284]: I0212 20:27:03.991573 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:03.992482 kubelet[2284]: I0212 20:27:03.992415 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a6d56cf-7b65-4c63-b0cb-b02883be6e12" (UID: "7a6d56cf-7b65-4c63-b0cb-b02883be6e12"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068684 2284 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-lib-modules\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068755 2284 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-host-proc-sys-net\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068783 2284 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-xtables-lock\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068806 2284 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hostproc\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068831 2284 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-clustermesh-secrets\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068858 2284 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cni-path\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068883 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-config-path\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.069785 kubelet[2284]: I0212 20:27:04.068907 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-cilium-cgroup\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.070347 kubelet[2284]: I0212 
20:27:04.068929 2284 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-hubble-tls\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.070347 kubelet[2284]: I0212 20:27:04.068956 2284 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-pkwt7\" (UniqueName: \"kubernetes.io/projected/7a6d56cf-7b65-4c63-b0cb-b02883be6e12-kube-api-access-pkwt7\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:04.328284 kubelet[2284]: I0212 20:27:04.328151 2284 scope.go:115] "RemoveContainer" containerID="2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329" Feb 12 20:27:04.332074 env[1808]: time="2024-02-12T20:27:04.331982969Z" level=info msg="RemoveContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\"" Feb 12 20:27:04.340948 env[1808]: time="2024-02-12T20:27:04.340869921Z" level=info msg="RemoveContainer for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" returns successfully" Feb 12 20:27:04.341557 kubelet[2284]: I0212 20:27:04.341517 2284 scope.go:115] "RemoveContainer" containerID="b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677" Feb 12 20:27:04.343771 env[1808]: time="2024-02-12T20:27:04.343315504Z" level=info msg="RemoveContainer for \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\"" Feb 12 20:27:04.348660 env[1808]: time="2024-02-12T20:27:04.348606070Z" level=info msg="RemoveContainer for \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\" returns successfully" Feb 12 20:27:04.349210 kubelet[2284]: I0212 20:27:04.349171 2284 scope.go:115] "RemoveContainer" containerID="997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843" Feb 12 20:27:04.350941 env[1808]: time="2024-02-12T20:27:04.350894643Z" level=info msg="RemoveContainer for \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\"" Feb 12 20:27:04.356400 env[1808]: time="2024-02-12T20:27:04.356300819Z" level=info msg="RemoveContainer for \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\" returns successfully" Feb 12 20:27:04.356896 kubelet[2284]: I0212 20:27:04.356860 2284 scope.go:115] "RemoveContainer" containerID="38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e" Feb 12 20:27:04.358788 env[1808]: time="2024-02-12T20:27:04.358714398Z" level=info msg="RemoveContainer for \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\"" Feb 12 20:27:04.365086 env[1808]: time="2024-02-12T20:27:04.365012905Z" level=info msg="RemoveContainer for \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\" returns successfully" Feb 12 20:27:04.365615 kubelet[2284]: I0212 20:27:04.365571 2284 scope.go:115] "RemoveContainer" containerID="2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675" Feb 12 20:27:04.367555 env[1808]: time="2024-02-12T20:27:04.367506969Z" level=info msg="RemoveContainer for \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\"" Feb 12 20:27:04.372958 env[1808]: time="2024-02-12T20:27:04.372902765Z" level=info msg="RemoveContainer for \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\" returns successfully" Feb 12 20:27:04.374747 kubelet[2284]: I0212 20:27:04.374534 2284 scope.go:115] "RemoveContainer" containerID="2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329" Feb 12 20:27:04.375040 env[1808]: time="2024-02-12T20:27:04.374914746Z" level=error 
msg="ContainerStatus for \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\": not found" Feb 12 20:27:04.375293 kubelet[2284]: E0212 20:27:04.375260 2284 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\": not found" containerID="2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329" Feb 12 20:27:04.375405 kubelet[2284]: I0212 20:27:04.375329 2284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329} err="failed to get container status \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\": rpc error: code = NotFound desc = an error occurred when try to find container \"2dd27b222b5f574d0e3b8e6c695107b33592b9a05b56991d96c6584cf8846329\": not found" Feb 12 20:27:04.375405 kubelet[2284]: I0212 20:27:04.375355 2284 scope.go:115] "RemoveContainer" containerID="b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677" Feb 12 20:27:04.375718 env[1808]: time="2024-02-12T20:27:04.375626679Z" level=error msg="ContainerStatus for \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\": not found" Feb 12 20:27:04.376042 kubelet[2284]: E0212 20:27:04.375998 2284 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\": not found" containerID="b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677" Feb 12 20:27:04.376133 kubelet[2284]: I0212 20:27:04.376059 2284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677} err="failed to get container status \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1473a279786c6ea07a9f7672a74ce1faf4cfa07f7a29e11f5f2424036726677\": not found" Feb 12 20:27:04.376133 kubelet[2284]: I0212 20:27:04.376087 2284 scope.go:115] "RemoveContainer" containerID="997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843" Feb 12 20:27:04.376449 env[1808]: time="2024-02-12T20:27:04.376366680Z" level=error msg="ContainerStatus for \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\": not found" Feb 12 20:27:04.377154 kubelet[2284]: E0212 20:27:04.376899 2284 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\": not found" containerID="997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843" Feb 12 20:27:04.377154 kubelet[2284]: I0212 20:27:04.376955 2284 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={Type:containerd ID:997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843} err="failed to get container status \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\": rpc error: code = NotFound desc = an error occurred when try to find container \"997362882860578bee8d3b45188983ca1a063dfa2c900edeb624ea2735704843\": not found" Feb 12 20:27:04.377154 kubelet[2284]: I0212 20:27:04.377002 2284 scope.go:115] "RemoveContainer" containerID="38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e" Feb 12 20:27:04.377832 env[1808]: time="2024-02-12T20:27:04.377713877Z" level=error msg="ContainerStatus for \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\": not found" Feb 12 20:27:04.378201 kubelet[2284]: E0212 20:27:04.378169 2284 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\": not found" containerID="38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e" Feb 12 20:27:04.378304 kubelet[2284]: I0212 20:27:04.378227 2284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e} err="failed to get container status \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\": rpc error: code = NotFound desc = an error occurred when try to find container \"38dc4e5b35ea0c2606e32bc85c85db07593110328c3c43674967e3f62316e95e\": not found" Feb 12 20:27:04.378304 kubelet[2284]: I0212 20:27:04.378250 2284 scope.go:115] "RemoveContainer" containerID="2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675" Feb 12 20:27:04.378573 env[1808]: time="2024-02-12T20:27:04.378494907Z" level=error msg="ContainerStatus for \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\": not found" Feb 12 20:27:04.378810 kubelet[2284]: E0212 20:27:04.378778 2284 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\": not found" containerID="2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675" Feb 12 20:27:04.378931 kubelet[2284]: I0212 20:27:04.378833 2284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675} err="failed to get container status \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\": rpc error: code = NotFound desc = an error occurred when try to find container \"2996d12314005bba9ac6f985129cefa22fd180c249852e510a5d3209b7cef675\": not found" Feb 12 20:27:04.814356 kubelet[2284]: E0212 20:27:04.814296 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:05.815032 kubelet[2284]: E0212 20:27:05.814964 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
20:27:06.053227 kubelet[2284]: I0212 20:27:06.052073 2284 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7a6d56cf-7b65-4c63-b0cb-b02883be6e12 path="/var/lib/kubelet/pods/7a6d56cf-7b65-4c63-b0cb-b02883be6e12/volumes" Feb 12 20:27:06.419132 kubelet[2284]: I0212 20:27:06.419082 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:27:06.419336 kubelet[2284]: E0212 20:27:06.419165 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="mount-cgroup" Feb 12 20:27:06.419336 kubelet[2284]: E0212 20:27:06.419188 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="clean-cilium-state" Feb 12 20:27:06.419336 kubelet[2284]: E0212 20:27:06.419206 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="cilium-agent" Feb 12 20:27:06.419336 kubelet[2284]: E0212 20:27:06.419224 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="apply-sysctl-overwrites" Feb 12 20:27:06.419336 kubelet[2284]: E0212 20:27:06.419241 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="mount-bpf-fs" Feb 12 20:27:06.419336 kubelet[2284]: I0212 20:27:06.419285 2284 memory_manager.go:346] "RemoveStaleState removing state" podUID="7a6d56cf-7b65-4c63-b0cb-b02883be6e12" containerName="cilium-agent" Feb 12 20:27:06.461323 kubelet[2284]: I0212 20:27:06.461255 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:27:06.465160 kubelet[2284]: W0212 20:27:06.465127 2284 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.25.148" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.25.148' and this object Feb 12 20:27:06.465420 kubelet[2284]: E0212 20:27:06.465397 2284 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.25.148" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.25.148' and this object Feb 12 20:27:06.583909 kubelet[2284]: I0212 20:27:06.583870 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-run\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.584141 kubelet[2284]: I0212 20:27:06.584119 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-clustermesh-secrets\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.584310 kubelet[2284]: I0212 20:27:06.584290 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-ipsec-secrets\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") 
" pod="kube-system/cilium-fj575" Feb 12 20:27:06.584461 kubelet[2284]: I0212 20:27:06.584440 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-kernel\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.584612 kubelet[2284]: I0212 20:27:06.584592 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cni-path\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.584856 kubelet[2284]: I0212 20:27:06.584818 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-net\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.584937 kubelet[2284]: I0212 20:27:06.584921 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8384779c-ed83-422f-b2b5-cbfa316321fc-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-cpx8p\" (UID: \"8384779c-ed83-422f-b2b5-cbfa316321fc\") " pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" Feb 12 20:27:06.585025 kubelet[2284]: I0212 20:27:06.584997 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdxx\" (UniqueName: \"kubernetes.io/projected/8384779c-ed83-422f-b2b5-cbfa316321fc-kube-api-access-lpdxx\") pod \"cilium-operator-f59cbd8c6-cpx8p\" (UID: \"8384779c-ed83-422f-b2b5-cbfa316321fc\") " pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" Feb 12 20:27:06.585128 kubelet[2284]: I0212 20:27:06.585085 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-bpf-maps\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585201 kubelet[2284]: I0212 20:27:06.585133 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hubble-tls\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585278 kubelet[2284]: I0212 20:27:06.585202 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdhps\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-kube-api-access-xdhps\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585278 kubelet[2284]: I0212 20:27:06.585272 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-config-path\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585401 kubelet[2284]: 
I0212 20:27:06.585351 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-etc-cni-netd\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585465 kubelet[2284]: I0212 20:27:06.585401 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-xtables-lock\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585526 kubelet[2284]: I0212 20:27:06.585472 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hostproc\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585596 kubelet[2284]: I0212 20:27:06.585540 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-cgroup\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.585660 kubelet[2284]: I0212 20:27:06.585608 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-lib-modules\") pod \"cilium-fj575\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " pod="kube-system/cilium-fj575" Feb 12 20:27:06.769614 env[1808]: time="2024-02-12T20:27:06.769530743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cpx8p,Uid:8384779c-ed83-422f-b2b5-cbfa316321fc,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:06.800444 env[1808]: time="2024-02-12T20:27:06.800312214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:06.800444 env[1808]: time="2024-02-12T20:27:06.800388979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:06.800858 env[1808]: time="2024-02-12T20:27:06.800416448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:06.801426 env[1808]: time="2024-02-12T20:27:06.801334807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715 pid=3971 runtime=io.containerd.runc.v2 Feb 12 20:27:06.815511 kubelet[2284]: E0212 20:27:06.815402 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:06.896366 env[1808]: time="2024-02-12T20:27:06.896312604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cpx8p,Uid:8384779c-ed83-422f-b2b5-cbfa316321fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715\"" Feb 12 20:27:06.899230 env[1808]: time="2024-02-12T20:27:06.899177915Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:27:07.325293 env[1808]: time="2024-02-12T20:27:07.325222246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fj575,Uid:4f5006c5-4c12-4fe3-99eb-e29caaccf0ce,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:07.349474 env[1808]: time="2024-02-12T20:27:07.349335194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:07.349474 env[1808]: time="2024-02-12T20:27:07.349414119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:07.349798 env[1808]: time="2024-02-12T20:27:07.349441647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:07.350426 env[1808]: time="2024-02-12T20:27:07.350318449Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d pid=4013 runtime=io.containerd.runc.v2 Feb 12 20:27:07.419415 env[1808]: time="2024-02-12T20:27:07.419353991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fj575,Uid:4f5006c5-4c12-4fe3-99eb-e29caaccf0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:07.424049 env[1808]: time="2024-02-12T20:27:07.423996317Z" level=info msg="CreateContainer within sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:07.447220 env[1808]: time="2024-02-12T20:27:07.447132178Z" level=info msg="CreateContainer within sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\"" Feb 12 20:27:07.448111 env[1808]: time="2024-02-12T20:27:07.448065153Z" level=info msg="StartContainer for \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\"" Feb 12 20:27:07.553507 env[1808]: time="2024-02-12T20:27:07.552919791Z" level=info msg="StartContainer for \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\" returns successfully" Feb 12 20:27:07.614246 env[1808]: time="2024-02-12T20:27:07.614094738Z" level=info msg="shim disconnected" id=09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032 Feb 12 20:27:07.614752 env[1808]: time="2024-02-12T20:27:07.614692141Z" level=warning msg="cleaning up after shim disconnected" id=09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032 namespace=k8s.io Feb 12 20:27:07.614908 env[1808]: time="2024-02-12T20:27:07.614878431Z" level=info msg="cleaning up dead shim" Feb 12 20:27:07.627491 env[1808]: time="2024-02-12T20:27:07.627436433Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4096 runtime=io.containerd.runc.v2\n" Feb 12 20:27:07.815713 kubelet[2284]: E0212 20:27:07.815616 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:08.159788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541980996.mount: Deactivated successfully. Feb 12 20:27:08.343859 env[1808]: time="2024-02-12T20:27:08.343805712Z" level=info msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:08.344563 env[1808]: time="2024-02-12T20:27:08.344521449Z" level=info msg="Container to stop \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:08.348259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d-shm.mount: Deactivated successfully. 
Feb 12 20:27:08.356282 kubelet[2284]: I0212 20:27:08.354861 2284 setters.go:548] "Node became not ready" node="172.31.25.148" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:27:08.354799985 +0000 UTC m=+96.519115567 LastTransitionTime:2024-02-12 20:27:08.354799985 +0000 UTC m=+96.519115567 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:27:08.403968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d-rootfs.mount: Deactivated successfully. Feb 12 20:27:08.433492 env[1808]: time="2024-02-12T20:27:08.433334213Z" level=info msg="shim disconnected" id=f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d Feb 12 20:27:08.433492 env[1808]: time="2024-02-12T20:27:08.433404474Z" level=warning msg="cleaning up after shim disconnected" id=f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d namespace=k8s.io Feb 12 20:27:08.433492 env[1808]: time="2024-02-12T20:27:08.433427850Z" level=info msg="cleaning up dead shim" Feb 12 20:27:08.450195 env[1808]: time="2024-02-12T20:27:08.450108507Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\n" Feb 12 20:27:08.450753 env[1808]: time="2024-02-12T20:27:08.450663777Z" level=info msg="TearDown network for sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" successfully" Feb 12 20:27:08.450877 env[1808]: time="2024-02-12T20:27:08.450717142Z" level=info msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" returns successfully" Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609391 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdhps\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-kube-api-access-xdhps\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609462 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-config-path\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609502 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-cgroup\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609544 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-run\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609584 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-etc-cni-netd\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: 
\"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.613777 kubelet[2284]: I0212 20:27:08.609627 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hostproc\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609669 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hubble-tls\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609719 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-clustermesh-secrets\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609781 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-ipsec-secrets\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609820 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-net\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609868 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-kernel\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614303 kubelet[2284]: I0212 20:27:08.609913 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-lib-modules\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614673 kubelet[2284]: I0212 20:27:08.609950 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cni-path\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614673 kubelet[2284]: I0212 20:27:08.609989 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-bpf-maps\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 20:27:08.614673 kubelet[2284]: I0212 20:27:08.610030 2284 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-xtables-lock\") pod \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\" (UID: \"4f5006c5-4c12-4fe3-99eb-e29caaccf0ce\") " Feb 12 
20:27:08.614673 kubelet[2284]: I0212 20:27:08.610111 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.614673 kubelet[2284]: W0212 20:27:08.611071 2284 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:27:08.619299 kubelet[2284]: I0212 20:27:08.619244 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:27:08.621646 kubelet[2284]: I0212 20:27:08.621599 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.621896 kubelet[2284]: I0212 20:27:08.621869 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622073 kubelet[2284]: I0212 20:27:08.622048 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622225 kubelet[2284]: I0212 20:27:08.622201 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622776 kubelet[2284]: I0212 20:27:08.622706 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622895 kubelet[2284]: I0212 20:27:08.622791 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622895 kubelet[2284]: I0212 20:27:08.622834 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.622895 kubelet[2284]: I0212 20:27:08.622873 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.623093 kubelet[2284]: I0212 20:27:08.622912 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:08.624007 kubelet[2284]: I0212 20:27:08.623948 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-kube-api-access-xdhps" (OuterVolumeSpecName: "kube-api-access-xdhps") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "kube-api-access-xdhps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:08.636937 kubelet[2284]: I0212 20:27:08.636888 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:08.638964 kubelet[2284]: I0212 20:27:08.638899 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:08.639659 kubelet[2284]: I0212 20:27:08.639595 2284 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" (UID: "4f5006c5-4c12-4fe3-99eb-e29caaccf0ce"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:08.695184 systemd[1]: var-lib-kubelet-pods-4f5006c5\x2d4c12\x2d4fe3\x2d99eb\x2de29caaccf0ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:08.695462 systemd[1]: var-lib-kubelet-pods-4f5006c5\x2d4c12\x2d4fe3\x2d99eb\x2de29caaccf0ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdhps.mount: Deactivated successfully. Feb 12 20:27:08.695682 systemd[1]: var-lib-kubelet-pods-4f5006c5\x2d4c12\x2d4fe3\x2d99eb\x2de29caaccf0ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:27:08.695931 systemd[1]: var-lib-kubelet-pods-4f5006c5\x2d4c12\x2d4fe3\x2d99eb\x2de29caaccf0ce-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:08.710802 kubelet[2284]: I0212 20:27:08.710759 2284 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-lib-modules\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711034 kubelet[2284]: I0212 20:27:08.711013 2284 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cni-path\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711173 kubelet[2284]: I0212 20:27:08.711152 2284 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-bpf-maps\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711307 kubelet[2284]: I0212 20:27:08.711283 2284 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-xtables-lock\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711444 kubelet[2284]: I0212 20:27:08.711424 2284 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-etc-cni-netd\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711576 kubelet[2284]: I0212 20:27:08.711556 2284 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-xdhps\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-kube-api-access-xdhps\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.711700 kubelet[2284]: I0212 20:27:08.711681 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-config-path\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.712112 kubelet[2284]: I0212 20:27:08.712090 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-cgroup\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.712321 kubelet[2284]: I0212 20:27:08.712299 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-run\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.712539 kubelet[2284]: I0212 20:27:08.712517 2284 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hostproc\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 
20:27:08.712765 kubelet[2284]: I0212 20:27:08.712744 2284 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-hubble-tls\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.712884 kubelet[2284]: I0212 20:27:08.712865 2284 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-clustermesh-secrets\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.713003 kubelet[2284]: I0212 20:27:08.712984 2284 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-kernel\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.713154 kubelet[2284]: I0212 20:27:08.713133 2284 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-cilium-ipsec-secrets\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.713272 kubelet[2284]: I0212 20:27:08.713253 2284 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce-host-proc-sys-net\") on node \"172.31.25.148\" DevicePath \"\"" Feb 12 20:27:08.816524 kubelet[2284]: E0212 20:27:08.816463 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:08.933959 kubelet[2284]: E0212 20:27:08.933928 2284 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:09.224502 env[1808]: time="2024-02-12T20:27:09.224445005Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:09.227767 env[1808]: time="2024-02-12T20:27:09.227688677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:09.230997 env[1808]: time="2024-02-12T20:27:09.230932877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:27:09.232262 env[1808]: time="2024-02-12T20:27:09.232186698Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 20:27:09.236207 env[1808]: time="2024-02-12T20:27:09.236155154Z" level=info msg="CreateContainer within sandbox \"774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:27:09.255264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875187979.mount: Deactivated successfully. 
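Around this point the kubelet has marked node 172.31.25.148 NotReady because the CNI plugin is not yet initialized, and has finished detaching the old cilium-fj575 pod's volumes. A short client-go sketch for reading that Ready condition from the API server is below; the kubeconfig path is only an assumption, while the node name is taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any credentials allowed to read nodes would do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "172.31.25.148", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI plugin is missing this should show Status=False, Reason=KubeletNotReady.
			fmt.Printf("Ready=%v reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```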
Feb 12 20:27:09.269054 env[1808]: time="2024-02-12T20:27:09.268967716Z" level=info msg="CreateContainer within sandbox \"774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583\"" Feb 12 20:27:09.270183 env[1808]: time="2024-02-12T20:27:09.270051015Z" level=info msg="StartContainer for \"4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583\"" Feb 12 20:27:09.352620 kubelet[2284]: I0212 20:27:09.352586 2284 scope.go:115] "RemoveContainer" containerID="09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032" Feb 12 20:27:09.361277 env[1808]: time="2024-02-12T20:27:09.361200120Z" level=info msg="RemoveContainer for \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\"" Feb 12 20:27:09.379917 env[1808]: time="2024-02-12T20:27:09.379824085Z" level=info msg="RemoveContainer for \"09b321ff9e3986efdff83e26f58131a9e4c0f5bfa5b1ca77574005508de24032\" returns successfully" Feb 12 20:27:09.394521 env[1808]: time="2024-02-12T20:27:09.394432158Z" level=info msg="StartContainer for \"4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583\" returns successfully" Feb 12 20:27:09.409276 kubelet[2284]: I0212 20:27:09.408369 2284 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:27:09.409276 kubelet[2284]: E0212 20:27:09.408457 2284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" containerName="mount-cgroup" Feb 12 20:27:09.409276 kubelet[2284]: I0212 20:27:09.408506 2284 memory_manager.go:346] "RemoveStaleState removing state" podUID="4f5006c5-4c12-4fe3-99eb-e29caaccf0ce" containerName="mount-cgroup" Feb 12 20:27:09.527055 kubelet[2284]: I0212 20:27:09.526897 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-xtables-lock\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527372 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-cilium-run\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527442 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-cilium-cgroup\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527491 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-cilium-ipsec-secrets\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527539 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-hubble-tls\") pod \"cilium-x5z6k\" 
(UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527587 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-host-proc-sys-net\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.528642 kubelet[2284]: I0212 20:27:09.527633 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-host-proc-sys-kernel\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527680 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgh5\" (UniqueName: \"kubernetes.io/projected/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-kube-api-access-sbgh5\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527747 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-lib-modules\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527798 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-clustermesh-secrets\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527845 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-cilium-config-path\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527888 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-bpf-maps\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529124 kubelet[2284]: I0212 20:27:09.527930 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-hostproc\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529487 kubelet[2284]: I0212 20:27:09.527973 2284 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-cni-path\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.529487 kubelet[2284]: I0212 20:27:09.528016 2284 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efecf09e-b27b-4f5e-be85-aa5ee2248d0a-etc-cni-netd\") pod \"cilium-x5z6k\" (UID: \"efecf09e-b27b-4f5e-be85-aa5ee2248d0a\") " pod="kube-system/cilium-x5z6k" Feb 12 20:27:09.762972 env[1808]: time="2024-02-12T20:27:09.762862316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5z6k,Uid:efecf09e-b27b-4f5e-be85-aa5ee2248d0a,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:09.792394 env[1808]: time="2024-02-12T20:27:09.792137539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:09.792679 env[1808]: time="2024-02-12T20:27:09.792606816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:09.792978 env[1808]: time="2024-02-12T20:27:09.792915807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:09.794308 env[1808]: time="2024-02-12T20:27:09.794203518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099 pid=4200 runtime=io.containerd.runc.v2 Feb 12 20:27:09.817195 kubelet[2284]: E0212 20:27:09.817116 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:09.881183 env[1808]: time="2024-02-12T20:27:09.881123175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5z6k,Uid:efecf09e-b27b-4f5e-be85-aa5ee2248d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\"" Feb 12 20:27:09.886049 env[1808]: time="2024-02-12T20:27:09.885997593Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:09.919633 env[1808]: time="2024-02-12T20:27:09.919563055Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dab23481d1ca80388aba9cbbe2f4f691285a378fdad2569adcf87fc36f1cf376\"" Feb 12 20:27:09.920617 env[1808]: time="2024-02-12T20:27:09.920571666Z" level=info msg="StartContainer for \"dab23481d1ca80388aba9cbbe2f4f691285a378fdad2569adcf87fc36f1cf376\"" Feb 12 20:27:10.020144 env[1808]: time="2024-02-12T20:27:10.020088960Z" level=info msg="StartContainer for \"dab23481d1ca80388aba9cbbe2f4f691285a378fdad2569adcf87fc36f1cf376\" returns successfully" Feb 12 20:27:10.051833 env[1808]: time="2024-02-12T20:27:10.051656271Z" level=info msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:10.055425 kubelet[2284]: I0212 20:27:10.054784 2284 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4f5006c5-4c12-4fe3-99eb-e29caaccf0ce path="/var/lib/kubelet/pods/4f5006c5-4c12-4fe3-99eb-e29caaccf0ce/volumes" Feb 12 20:27:10.056208 env[1808]: time="2024-02-12T20:27:10.055963321Z" level=info msg="TearDown network for sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" successfully" Feb 12 20:27:10.056942 env[1808]: time="2024-02-12T20:27:10.056872931Z" level=info 
msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" returns successfully" Feb 12 20:27:10.393487 kubelet[2284]: I0212 20:27:10.393333 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" podStartSLOduration=-9.223372032461494e+09 pod.CreationTimestamp="2024-02-12 20:27:06 +0000 UTC" firstStartedPulling="2024-02-12 20:27:06.898475258 +0000 UTC m=+95.062790828" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:10.393201439 +0000 UTC m=+98.557517021" watchObservedRunningTime="2024-02-12 20:27:10.393280568 +0000 UTC m=+98.557596150" Feb 12 20:27:10.482207 env[1808]: time="2024-02-12T20:27:10.482134633Z" level=info msg="shim disconnected" id=dab23481d1ca80388aba9cbbe2f4f691285a378fdad2569adcf87fc36f1cf376 Feb 12 20:27:10.482905 env[1808]: time="2024-02-12T20:27:10.482203010Z" level=warning msg="cleaning up after shim disconnected" id=dab23481d1ca80388aba9cbbe2f4f691285a378fdad2569adcf87fc36f1cf376 namespace=k8s.io Feb 12 20:27:10.482905 env[1808]: time="2024-02-12T20:27:10.482228414Z" level=info msg="cleaning up dead shim" Feb 12 20:27:10.495173 env[1808]: time="2024-02-12T20:27:10.495105069Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4284 runtime=io.containerd.runc.v2\n" Feb 12 20:27:10.695460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184825703.mount: Deactivated successfully. Feb 12 20:27:10.817674 kubelet[2284]: E0212 20:27:10.817613 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:11.371069 env[1808]: time="2024-02-12T20:27:11.370990822Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:27:11.396884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697599378.mount: Deactivated successfully. 
Feb 12 20:27:11.407439 env[1808]: time="2024-02-12T20:27:11.407378479Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5\"" Feb 12 20:27:11.408474 env[1808]: time="2024-02-12T20:27:11.408424457Z" level=info msg="StartContainer for \"f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5\"" Feb 12 20:27:11.510984 env[1808]: time="2024-02-12T20:27:11.510921536Z" level=info msg="StartContainer for \"f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5\" returns successfully" Feb 12 20:27:11.560302 env[1808]: time="2024-02-12T20:27:11.560235920Z" level=info msg="shim disconnected" id=f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5 Feb 12 20:27:11.560647 env[1808]: time="2024-02-12T20:27:11.560610348Z" level=warning msg="cleaning up after shim disconnected" id=f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5 namespace=k8s.io Feb 12 20:27:11.560884 env[1808]: time="2024-02-12T20:27:11.560834246Z" level=info msg="cleaning up dead shim" Feb 12 20:27:11.574669 env[1808]: time="2024-02-12T20:27:11.574614218Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4346 runtime=io.containerd.runc.v2\n" Feb 12 20:27:11.697853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c8d186653e9faebb39b3e8b1ae63d1d9555f006eaee18e7cbcb9698a8df8a5-rootfs.mount: Deactivated successfully. Feb 12 20:27:11.818062 kubelet[2284]: E0212 20:27:11.817986 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:12.377718 env[1808]: time="2024-02-12T20:27:12.377621888Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:27:12.408244 env[1808]: time="2024-02-12T20:27:12.408162403Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5\"" Feb 12 20:27:12.409466 env[1808]: time="2024-02-12T20:27:12.409410824Z" level=info msg="StartContainer for \"5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5\"" Feb 12 20:27:12.537867 env[1808]: time="2024-02-12T20:27:12.537785550Z" level=info msg="StartContainer for \"5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5\" returns successfully" Feb 12 20:27:12.584870 env[1808]: time="2024-02-12T20:27:12.584797373Z" level=info msg="shim disconnected" id=5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5 Feb 12 20:27:12.584870 env[1808]: time="2024-02-12T20:27:12.584866205Z" level=warning msg="cleaning up after shim disconnected" id=5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5 namespace=k8s.io Feb 12 20:27:12.585227 env[1808]: time="2024-02-12T20:27:12.584889954Z" level=info msg="cleaning up dead shim" Feb 12 20:27:12.598466 env[1808]: time="2024-02-12T20:27:12.598391999Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4406 runtime=io.containerd.runc.v2\n" Feb 12 20:27:12.697953 
systemd[1]: run-containerd-runc-k8s.io-5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5-runc.4SzAUf.mount: Deactivated successfully. Feb 12 20:27:12.698222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a5ac5259dce933aa86e4a9664372aec475a54855e9680a8436b5e4c16acd0c5-rootfs.mount: Deactivated successfully. Feb 12 20:27:12.819012 kubelet[2284]: E0212 20:27:12.818973 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:13.380693 env[1808]: time="2024-02-12T20:27:13.380608487Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:27:13.403134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30806312.mount: Deactivated successfully. Feb 12 20:27:13.418894 env[1808]: time="2024-02-12T20:27:13.418811454Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2\"" Feb 12 20:27:13.419710 env[1808]: time="2024-02-12T20:27:13.419624522Z" level=info msg="StartContainer for \"c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2\"" Feb 12 20:27:13.541081 env[1808]: time="2024-02-12T20:27:13.541020054Z" level=info msg="StartContainer for \"c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2\" returns successfully" Feb 12 20:27:13.589407 env[1808]: time="2024-02-12T20:27:13.589325633Z" level=info msg="shim disconnected" id=c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2 Feb 12 20:27:13.589407 env[1808]: time="2024-02-12T20:27:13.589399554Z" level=warning msg="cleaning up after shim disconnected" id=c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2 namespace=k8s.io Feb 12 20:27:13.589775 env[1808]: time="2024-02-12T20:27:13.589422486Z" level=info msg="cleaning up dead shim" Feb 12 20:27:13.603615 env[1808]: time="2024-02-12T20:27:13.603542438Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4462 runtime=io.containerd.runc.v2\n" Feb 12 20:27:13.698012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c938998e62fdca44a64ae0d64729ddc8ee6089acf6b733475d2250ec1f16a5b2-rootfs.mount: Deactivated successfully. Feb 12 20:27:13.731196 kubelet[2284]: E0212 20:27:13.731133 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:13.820783 kubelet[2284]: E0212 20:27:13.820713 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:13.936054 kubelet[2284]: E0212 20:27:13.936005 2284 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:14.387919 env[1808]: time="2024-02-12T20:27:14.387858600Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:27:14.410582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769193970.mount: Deactivated successfully. 
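By this point the cilium-x5z6k pod has run its mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state init containers in sequence, each exiting before the next starts. A sketch of observing that progression through the pod's status with client-go (same assumed kubeconfig as in the earlier sketch):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-x5z6k", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Init containers run one at a time; each must exit successfully before the next starts.
	for _, s := range pod.Status.InitContainerStatuses {
		fmt.Printf("init %-25s ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("main %-25s ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
	}
}
```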
Feb 12 20:27:14.424894 env[1808]: time="2024-02-12T20:27:14.424794981Z" level=info msg="CreateContainer within sandbox \"de72f186f5232aaa10984fea9272bc68c018256c5597704e9961bbafa065f099\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"80f27381da842e4f1ff17581a0f0c2066f800c24c5655a2076178e28ee61ea79\"" Feb 12 20:27:14.425887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398329105.mount: Deactivated successfully. Feb 12 20:27:14.427058 env[1808]: time="2024-02-12T20:27:14.426459337Z" level=info msg="StartContainer for \"80f27381da842e4f1ff17581a0f0c2066f800c24c5655a2076178e28ee61ea79\"" Feb 12 20:27:14.532081 env[1808]: time="2024-02-12T20:27:14.531935256Z" level=info msg="StartContainer for \"80f27381da842e4f1ff17581a0f0c2066f800c24c5655a2076178e28ee61ea79\" returns successfully" Feb 12 20:27:14.821340 kubelet[2284]: E0212 20:27:14.821266 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:15.230257 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 20:27:15.421630 kubelet[2284]: I0212 20:27:15.421590 2284 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x5z6k" podStartSLOduration=6.421515509 pod.CreationTimestamp="2024-02-12 20:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:15.420076516 +0000 UTC m=+103.584392110" watchObservedRunningTime="2024-02-12 20:27:15.421515509 +0000 UTC m=+103.585831103" Feb 12 20:27:15.821751 kubelet[2284]: E0212 20:27:15.821688 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:16.822636 kubelet[2284]: E0212 20:27:16.822594 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:17.824440 kubelet[2284]: E0212 20:27:17.824379 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:18.825147 kubelet[2284]: E0212 20:27:18.825070 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:19.001599 (udev-worker)[5019]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:27:19.009644 systemd-networkd[1592]: lxc_health: Link UP Feb 12 20:27:19.020535 (udev-worker)[5021]: Network interface NamePolicy= disabled on kernel command line. 
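systemd-networkd reports the lxc_health interface coming up in the entries that follow; Cilium uses this link for its internal health checks. A small sketch for inspecting that link's state with the third-party vishvananda/netlink package (an illustration only, not a tool used anywhere in this log):

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		// The link only exists once the cilium agent has created it.
		log.Fatal(err)
	}
	attrs := link.Attrs()
	fmt.Printf("%s index=%d mtu=%d state=%s\n",
		attrs.Name, attrs.Index, attrs.MTU, attrs.OperState.String())
}
```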
Feb 12 20:27:19.030398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:27:19.026571 systemd-networkd[1592]: lxc_health: Gained carrier Feb 12 20:27:19.826272 kubelet[2284]: E0212 20:27:19.826154 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:20.827254 kubelet[2284]: E0212 20:27:20.827181 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:21.024064 systemd-networkd[1592]: lxc_health: Gained IPv6LL Feb 12 20:27:21.828152 kubelet[2284]: E0212 20:27:21.828050 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:22.828922 kubelet[2284]: E0212 20:27:22.828858 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:22.918123 systemd[1]: run-containerd-runc-k8s.io-80f27381da842e4f1ff17581a0f0c2066f800c24c5655a2076178e28ee61ea79-runc.N0opex.mount: Deactivated successfully. Feb 12 20:27:23.829737 kubelet[2284]: E0212 20:27:23.829662 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:24.829954 kubelet[2284]: E0212 20:27:24.829885 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:25.830635 kubelet[2284]: E0212 20:27:25.830563 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:26.830823 kubelet[2284]: E0212 20:27:26.830755 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:27.831137 kubelet[2284]: E0212 20:27:27.831066 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:28.832223 kubelet[2284]: E0212 20:27:28.832177 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:29.833693 kubelet[2284]: E0212 20:27:29.833634 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:30.834422 kubelet[2284]: E0212 20:27:30.834360 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:31.835613 kubelet[2284]: E0212 20:27:31.835534 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:32.836855 kubelet[2284]: E0212 20:27:32.836797 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:33.730765 kubelet[2284]: E0212 20:27:33.730690 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:33.752909 env[1808]: time="2024-02-12T20:27:33.752850871Z" level=info msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:33.753544 env[1808]: time="2024-02-12T20:27:33.752998520Z" level=info msg="TearDown network for sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" successfully" Feb 12 20:27:33.753544 env[1808]: 
time="2024-02-12T20:27:33.753057669Z" level=info msg="StopPodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" returns successfully" Feb 12 20:27:33.754311 env[1808]: time="2024-02-12T20:27:33.754256860Z" level=info msg="RemovePodSandbox for \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:33.754448 env[1808]: time="2024-02-12T20:27:33.754323820Z" level=info msg="Forcibly stopping sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\"" Feb 12 20:27:33.754519 env[1808]: time="2024-02-12T20:27:33.754456937Z" level=info msg="TearDown network for sandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" successfully" Feb 12 20:27:33.762024 env[1808]: time="2024-02-12T20:27:33.761940975Z" level=info msg="RemovePodSandbox \"f84e7115a85b66161e5e49c08779e9494315d03a2ea5f460918a2fb6c9a5d23d\" returns successfully" Feb 12 20:27:33.765341 env[1808]: time="2024-02-12T20:27:33.765273251Z" level=info msg="StopPodSandbox for \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\"" Feb 12 20:27:33.765508 env[1808]: time="2024-02-12T20:27:33.765417924Z" level=info msg="TearDown network for sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" successfully" Feb 12 20:27:33.765508 env[1808]: time="2024-02-12T20:27:33.765477972Z" level=info msg="StopPodSandbox for \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" returns successfully" Feb 12 20:27:33.766319 env[1808]: time="2024-02-12T20:27:33.766253009Z" level=info msg="RemovePodSandbox for \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\"" Feb 12 20:27:33.766561 env[1808]: time="2024-02-12T20:27:33.766484995Z" level=info msg="Forcibly stopping sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\"" Feb 12 20:27:33.766819 env[1808]: time="2024-02-12T20:27:33.766783988Z" level=info msg="TearDown network for sandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" successfully" Feb 12 20:27:33.773064 env[1808]: time="2024-02-12T20:27:33.773010970Z" level=info msg="RemovePodSandbox \"18aa4582aff9e54d08be8496254a4206ac670b0306de013db0df5c962c555014\" returns successfully" Feb 12 20:27:33.837307 kubelet[2284]: E0212 20:27:33.837270 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:34.839022 kubelet[2284]: E0212 20:27:34.838962 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:35.839753 kubelet[2284]: E0212 20:27:35.839680 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:36.839946 kubelet[2284]: E0212 20:27:36.839884 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:37.840347 kubelet[2284]: E0212 20:27:37.840302 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:38.841899 kubelet[2284]: E0212 20:27:38.841827 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:38.853469 kubelet[2284]: E0212 20:27:38.853110 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:27:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:27:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:27:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T20:27:28Z\\\",\\\"lastTransitionTime\\\":\\\"2024-02-12T20:27:28Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"172.31.25.148\": Patch \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:27:39.045081 kubelet[2284]: E0212 20:27:39.044962 2284 controller.go:189] failed to update lease, error: Put "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:27:39.674053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583-rootfs.mount: Deactivated successfully. Feb 12 20:27:39.789086 env[1808]: time="2024-02-12T20:27:39.789022815Z" level=info msg="shim disconnected" id=4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583 Feb 12 20:27:39.789975 env[1808]: time="2024-02-12T20:27:39.789932288Z" level=warning msg="cleaning up after shim disconnected" id=4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583 namespace=k8s.io Feb 12 20:27:39.790105 env[1808]: time="2024-02-12T20:27:39.790073949Z" level=info msg="cleaning up dead shim" Feb 12 20:27:39.805825 env[1808]: time="2024-02-12T20:27:39.805762665Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5141 runtime=io.containerd.runc.v2\n" Feb 12 20:27:39.843054 kubelet[2284]: E0212 20:27:39.842952 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:40.457935 kubelet[2284]: I0212 20:27:40.457900 2284 scope.go:115] "RemoveContainer" containerID="4c569fdd8e567dc495d8cdfe7471ba26cf8907077621f811235636b1c948f583" Feb 12 20:27:40.461767 env[1808]: time="2024-02-12T20:27:40.461678027Z" level=info msg="CreateContainer within sandbox \"774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Feb 12 20:27:40.483687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553174148.mount: Deactivated successfully. 
Feb 12 20:27:40.491031 env[1808]: time="2024-02-12T20:27:40.490968921Z" level=info msg="CreateContainer within sandbox \"774249ec6072a58ac2aad36eae5b55ba1883c0dc36e7fcd7b1aa6be7102bb715\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"50d4d55d5ef09f9e5501872d4ac93d1e7d2837968e1f68c128c09783c6cc399d\"" Feb 12 20:27:40.491980 env[1808]: time="2024-02-12T20:27:40.491921354Z" level=info msg="StartContainer for \"50d4d55d5ef09f9e5501872d4ac93d1e7d2837968e1f68c128c09783c6cc399d\"" Feb 12 20:27:40.598171 env[1808]: time="2024-02-12T20:27:40.595623342Z" level=info msg="StartContainer for \"50d4d55d5ef09f9e5501872d4ac93d1e7d2837968e1f68c128c09783c6cc399d\" returns successfully" Feb 12 20:27:40.843883 kubelet[2284]: E0212 20:27:40.843741 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:41.844324 kubelet[2284]: E0212 20:27:41.844264 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:42.845012 kubelet[2284]: E0212 20:27:42.844928 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:43.846203 kubelet[2284]: E0212 20:27:43.846138 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:44.846698 kubelet[2284]: E0212 20:27:44.846658 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:45.847797 kubelet[2284]: E0212 20:27:45.847751 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:46.849499 kubelet[2284]: E0212 20:27:46.849449 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:47.850649 kubelet[2284]: E0212 20:27:47.850582 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:48.850995 kubelet[2284]: E0212 20:27:48.850923 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:48.854211 kubelet[2284]: E0212 20:27:48.854179 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.25.148\": Get \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:27:49.045656 kubelet[2284]: E0212 20:27:49.045604 2284 controller.go:189] failed to update lease, error: Put "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:27:49.851291 kubelet[2284]: E0212 20:27:49.851246 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:50.852584 kubelet[2284]: E0212 20:27:50.852544 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:51.854076 kubelet[2284]: E0212 20:27:51.854004 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:52.854615 
kubelet[2284]: E0212 20:27:52.854515 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:53.730676 kubelet[2284]: E0212 20:27:53.730611 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:53.855662 kubelet[2284]: E0212 20:27:53.855599 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:54.856104 kubelet[2284]: E0212 20:27:54.856040 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:55.857127 kubelet[2284]: E0212 20:27:55.857081 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:56.858801 kubelet[2284]: E0212 20:27:56.858760 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:57.859945 kubelet[2284]: E0212 20:27:57.859883 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:58.854904 kubelet[2284]: E0212 20:27:58.854866 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.25.148\": Get \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:27:58.860046 kubelet[2284]: E0212 20:27:58.859991 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:27:59.046328 kubelet[2284]: E0212 20:27:59.046269 2284 controller.go:189] failed to update lease, error: Put "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:27:59.860231 kubelet[2284]: E0212 20:27:59.860162 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:00.860875 kubelet[2284]: E0212 20:28:00.860807 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:01.861044 kubelet[2284]: E0212 20:28:01.860969 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:02.861591 kubelet[2284]: E0212 20:28:02.861546 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:03.863350 kubelet[2284]: E0212 20:28:03.863282 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:04.863594 kubelet[2284]: E0212 20:28:04.863515 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:05.864024 kubelet[2284]: E0212 20:28:05.863950 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:06.864986 kubelet[2284]: E0212 20:28:06.864913 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:06.963021 kubelet[2284]: E0212 
Feb 12 20:28:06.965296 kubelet[2284]: E0212 20:28:06.965105 2284 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-f59cbd8c6-cpx8p.17b3377541def617", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-f59cbd8c6-cpx8p", UID:"8384779c-ed83-422f-b2b5-cbfa316321fc", APIVersion:"v1", ResourceVersion:"920", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 27, 40, 459382295, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 27, 40, 459382295, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.20.22:6443/api/v1/namespaces/kube-system/events": unexpected EOF'(may retry after sleeping)
Feb 12 20:28:06.976166 kubelet[2284]: E0212 20:28:06.976107 2284 controller.go:189] failed to update lease, error: Put "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": read tcp 172.31.25.148:54716->172.31.20.22:6443: read: connection reset by peer
Feb 12 20:28:06.976166 kubelet[2284]: I0212 20:28:06.976161 2284 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 12 20:28:06.976852 kubelet[2284]: E0212 20:28:06.976788 2284 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": dial tcp 172.31.20.22:6443: connect: connection refused
Feb 12 20:28:07.178284 kubelet[2284]: E0212 20:28:07.178123 2284 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": dial tcp 172.31.20.22:6443: connect: connection refused
Feb 12 20:28:07.579980 kubelet[2284]: E0212 20:28:07.579922 2284 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": dial tcp 172.31.20.22:6443: connect: connection refused
Feb 12 20:28:07.866062 kubelet[2284]: E0212 20:28:07.865928 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:07.963310 kubelet[2284]: E0212 20:28:07.963270 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.25.148\": Get \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148?timeout=10s\": dial tcp 172.31.20.22:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 20:28:07.964189 kubelet[2284]: E0212 20:28:07.964140 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.25.148\": Get \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148?timeout=10s\": dial tcp 172.31.20.22:6443: connect: connection refused"
Feb 12 20:28:07.964189 kubelet[2284]: E0212 20:28:07.964181 2284 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 12 20:28:07.965794 kubelet[2284]: I0212 20:28:07.965745 2284 status_manager.go:698] "Failed to get status for pod" podUID=8384779c-ed83-422f-b2b5-cbfa316321fc pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" err="Get \"https://172.31.20.22:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-cpx8p\": dial tcp 172.31.20.22:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 12 20:28:07.966289 kubelet[2284]: I0212 20:28:07.966244 2284 status_manager.go:698] "Failed to get status for pod" podUID=8384779c-ed83-422f-b2b5-cbfa316321fc pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" err="Get \"https://172.31.20.22:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-cpx8p\": dial tcp 172.31.20.22:6443: connect: connection refused"
Feb 12 20:28:07.967095 kubelet[2284]: I0212 20:28:07.967067 2284 status_manager.go:698] "Failed to get status for pod" podUID=8384779c-ed83-422f-b2b5-cbfa316321fc pod="kube-system/cilium-operator-f59cbd8c6-cpx8p" err="Get \"https://172.31.20.22:6443/api/v1/namespaces/kube-system/pods/cilium-operator-f59cbd8c6-cpx8p\": dial tcp 172.31.20.22:6443: connect: connection refused"
Feb 12 20:28:08.381432 kubelet[2284]: E0212 20:28:08.381378 2284 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": dial tcp 172.31.20.22:6443: connect: connection refused
Feb 12 20:28:08.867138 kubelet[2284]: E0212 20:28:08.867081 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:09.868177 kubelet[2284]: E0212 20:28:09.868106 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:09.982909 kubelet[2284]: E0212 20:28:09.982813 2284 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": dial tcp 172.31.20.22:6443: connect: connection refused
Feb 12 20:28:10.868312 kubelet[2284]: E0212 20:28:10.868260 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:11.127266 kubelet[2284]: E0212 20:28:11.126760 2284 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"cilium-operator-f59cbd8c6-cpx8p.17b3377541def617", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"cilium-operator-f59cbd8c6-cpx8p", UID:"8384779c-ed83-422f-b2b5-cbfa316321fc", APIVersion:"v1", ResourceVersion:"920", FieldPath:"spec.containers{cilium-operator}"}, Reason:"Pulled", Message:"Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"172.31.25.148"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 27, 40, 459382295, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 27, 40, 459382295, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.20.22:6443/api/v1/namespaces/kube-system/events": dial tcp 172.31.20.22:6443: connect: connection refused'(may retry after sleeping)
Feb 12 20:28:11.869782 kubelet[2284]: E0212 20:28:11.869717 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:12.871490 kubelet[2284]: E0212 20:28:12.871453 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:13.730654 kubelet[2284]: E0212 20:28:13.730613 2284 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:13.872889 kubelet[2284]: E0212 20:28:13.872829 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:14.874021 kubelet[2284]: E0212 20:28:14.873955 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:15.874355 kubelet[2284]: E0212 20:28:15.874311 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:16.875759 kubelet[2284]: E0212 20:28:16.875690 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:17.876858 kubelet[2284]: E0212 20:28:17.876791 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:18.877233 kubelet[2284]: E0212 20:28:18.877163 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:19.877526 kubelet[2284]: E0212 20:28:19.877463 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:20.878154 kubelet[2284]: E0212 20:28:20.878085 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:21.878811 kubelet[2284]: E0212 20:28:21.878768 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:28:22.879946 kubelet[2284]: E0212 20:28:22.879877 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:23.184237 kubelet[2284]: E0212 20:28:23.183840 2284 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://172.31.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.148?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 12 20:28:23.880319 kubelet[2284]: E0212 20:28:23.880276 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:24.881954 kubelet[2284]: E0212 20:28:24.881912 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:25.883282 kubelet[2284]: E0212 20:28:25.883214 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:26.883843 kubelet[2284]: E0212 20:28:26.883790 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:27.884946 kubelet[2284]: E0212 20:28:27.884883 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:28.162654 kubelet[2284]: E0212 20:28:28.162258 2284 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.25.148\": Get \"https://172.31.20.22:6443/api/v1/nodes/172.31.25.148?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 12 20:28:28.885309 kubelet[2284]: E0212 20:28:28.885272 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:29.887042 kubelet[2284]: E0212 20:28:29.886979 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:30.888015 kubelet[2284]: E0212 20:28:30.887971 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:28:31.889631 kubelet[2284]: E0212 20:28:31.889561 2284 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"