Dec 13 01:54:43.216577 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:54:43.216622 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:54:43.216647 kernel: KASLR disabled due to lack of seed
Dec 13 01:54:43.216664 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:54:43.216680 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:54:43.216695 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:54:43.216713 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:54:43.216728 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:54:43.216744 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:54:43.216760 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:54:43.216780 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:54:43.216796 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:54:43.216812 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:54:43.216827 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:54:43.216846 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:54:43.216867 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:54:43.216884 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:54:43.216901 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:54:43.216918 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:54:43.216934 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:54:43.216959 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:54:43.217078 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:43.217123 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:54:43.217165 kernel: Zone ranges:
Dec 13 01:54:43.217207 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:54:43.217248 kernel: DMA32 empty
Dec 13 01:54:43.217299 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:54:43.217341 kernel: Movable zone start for each node
Dec 13 01:54:43.217381 kernel: Early memory node ranges
Dec 13 01:54:43.217422 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:54:43.217464 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:54:43.217505 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:54:43.217546 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:54:43.217587 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:54:43.217629 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:54:43.217670 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:54:43.217711 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:54:43.217752 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:43.217801 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:54:43.217844 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:54:43.217902 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:54:43.217945 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:54:43.219077 kernel: psci: Trusted OS migration not required
Dec 13 01:54:43.219138 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:54:43.219183 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:54:43.219227 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:54:43.219271 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:54:43.219315 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:54:43.219360 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:54:43.219403 kernel: CPU features: detected: Spectre-v2
Dec 13 01:54:43.219447 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:54:43.219491 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:54:43.219535 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:54:43.219579 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:54:43.219631 kernel: alternatives: applying boot alternatives
Dec 13 01:54:43.219683 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:43.219729 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:54:43.219774 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:54:43.219819 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:43.219863 kernel: Fallback order for Node 0: 0
Dec 13 01:54:43.219908 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:54:43.219951 kernel: Policy zone: Normal
Dec 13 01:54:43.220037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:54:43.220059 kernel: software IO TLB: area num 2.
Dec 13 01:54:43.220076 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:54:43.220102 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:54:43.220120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:54:43.220138 kernel: trace event string verifier disabled
Dec 13 01:54:43.220155 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:54:43.220173 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:54:43.220209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:54:43.220230 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:54:43.220248 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:54:43.220267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:54:43.220284 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:54:43.220301 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:54:43.220324 kernel: GICv3: 96 SPIs implemented
Dec 13 01:54:43.220342 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:54:43.220359 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:54:43.220376 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:54:43.220393 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:54:43.220410 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:54:43.220428 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:54:43.220446 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:54:43.220463 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:54:43.220480 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:54:43.220497 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:54:43.220514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:54:43.220536 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:54:43.220554 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:54:43.220572 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:54:43.220590 kernel: Console: colour dummy device 80x25
Dec 13 01:54:43.220608 kernel: printk: console [tty1] enabled
Dec 13 01:54:43.220626 kernel: ACPI: Core revision 20230628
Dec 13 01:54:43.220644 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:54:43.220662 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:54:43.220680 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:54:43.220698 kernel: landlock: Up and running.
Dec 13 01:54:43.220719 kernel: SELinux: Initializing.
Dec 13 01:54:43.220737 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:43.220755 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:43.220773 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:43.220791 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:43.220808 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:54:43.220826 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:54:43.220844 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:54:43.220865 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:54:43.220883 kernel: Remapping and enabling EFI services.
Dec 13 01:54:43.220900 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:54:43.220918 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:54:43.220936 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:54:43.220953 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:54:43.220971 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:54:43.223041 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:54:43.223061 kernel: SMP: Total of 2 processors activated.
Dec 13 01:54:43.223080 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:54:43.223105 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:54:43.223123 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:54:43.223153 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:54:43.223176 kernel: alternatives: applying system-wide alternatives
Dec 13 01:54:43.223194 kernel: devtmpfs: initialized
Dec 13 01:54:43.223213 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:54:43.223231 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:54:43.223250 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:54:43.223269 kernel: SMBIOS 3.0.0 present.
Dec 13 01:54:43.223294 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:54:43.223313 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:54:43.223331 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:54:43.223350 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:54:43.223369 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:54:43.223387 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:54:43.223406 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Dec 13 01:54:43.223428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:54:43.223447 kernel: cpuidle: using governor menu
Dec 13 01:54:43.223465 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:54:43.223484 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:54:43.223502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:54:43.223521 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:54:43.223539 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:54:43.223558 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:54:43.223577 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:54:43.223599 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:54:43.223618 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:54:43.223637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:54:43.223656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:54:43.223675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:54:43.223694 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:54:43.223712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:54:43.223730 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:54:43.223749 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:54:43.223772 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:54:43.223791 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:54:43.223810 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:54:43.223828 kernel: ACPI: Interpreter enabled
Dec 13 01:54:43.223846 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:54:43.223864 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:54:43.223883 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:54:43.224229 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:54:43.224455 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:54:43.224658 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:54:43.224859 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:54:43.227136 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:54:43.227171 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:54:43.227191 kernel: acpiphp: Slot [1] registered
Dec 13 01:54:43.227210 kernel: acpiphp: Slot [2] registered
Dec 13 01:54:43.227228 kernel: acpiphp: Slot [3] registered
Dec 13 01:54:43.227254 kernel: acpiphp: Slot [4] registered
Dec 13 01:54:43.227273 kernel: acpiphp: Slot [5] registered
Dec 13 01:54:43.227291 kernel: acpiphp: Slot [6] registered
Dec 13 01:54:43.227309 kernel: acpiphp: Slot [7] registered
Dec 13 01:54:43.227328 kernel: acpiphp: Slot [8] registered
Dec 13 01:54:43.227345 kernel: acpiphp: Slot [9] registered
Dec 13 01:54:43.227364 kernel: acpiphp: Slot [10] registered
Dec 13 01:54:43.227382 kernel: acpiphp: Slot [11] registered
Dec 13 01:54:43.227400 kernel: acpiphp: Slot [12] registered
Dec 13 01:54:43.227418 kernel: acpiphp: Slot [13] registered
Dec 13 01:54:43.227442 kernel: acpiphp: Slot [14] registered
Dec 13 01:54:43.227460 kernel: acpiphp: Slot [15] registered
Dec 13 01:54:43.227479 kernel: acpiphp: Slot [16] registered
Dec 13 01:54:43.227497 kernel: acpiphp: Slot [17] registered
Dec 13 01:54:43.227516 kernel: acpiphp: Slot [18] registered
Dec 13 01:54:43.227534 kernel: acpiphp: Slot [19] registered
Dec 13 01:54:43.227553 kernel: acpiphp: Slot [20] registered
Dec 13 01:54:43.227571 kernel: acpiphp: Slot [21] registered
Dec 13 01:54:43.227590 kernel: acpiphp: Slot [22] registered
Dec 13 01:54:43.227613 kernel: acpiphp: Slot [23] registered
Dec 13 01:54:43.227632 kernel: acpiphp: Slot [24] registered
Dec 13 01:54:43.227650 kernel: acpiphp: Slot [25] registered
Dec 13 01:54:43.227668 kernel: acpiphp: Slot [26] registered
Dec 13 01:54:43.227686 kernel: acpiphp: Slot [27] registered
Dec 13 01:54:43.227705 kernel: acpiphp: Slot [28] registered
Dec 13 01:54:43.227723 kernel: acpiphp: Slot [29] registered
Dec 13 01:54:43.227741 kernel: acpiphp: Slot [30] registered
Dec 13 01:54:43.227759 kernel: acpiphp: Slot [31] registered
Dec 13 01:54:43.227777 kernel: PCI host bridge to bus 0000:00
Dec 13 01:54:43.228082 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:54:43.228302 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:54:43.228488 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:43.228669 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:54:43.228902 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:54:43.232470 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:54:43.232764 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:54:43.233078 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:54:43.233303 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:54:43.233520 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:43.233764 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:54:43.233969 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:54:43.238424 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:43.238652 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:54:43.238866 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:43.239126 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:43.239336 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:54:43.239547 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:54:43.239753 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:54:43.239966 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:54:43.240250 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:54:43.240817 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:54:43.242230 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:43.242258 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:54:43.242278 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:54:43.242297 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:54:43.242316 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:54:43.242334 kernel: iommu: Default domain type: Translated
Dec 13 01:54:43.242362 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:54:43.242381 kernel: efivars: Registered efivars operations
Dec 13 01:54:43.242399 kernel: vgaarb: loaded
Dec 13 01:54:43.242417 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:54:43.242436 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:54:43.242454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:54:43.242472 kernel: pnp: PnP ACPI init
Dec 13 01:54:43.242684 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:54:43.242717 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:54:43.242737 kernel: NET: Registered PF_INET protocol family
Dec 13 01:54:43.242755 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:54:43.242774 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:54:43.242793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:54:43.242811 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:54:43.242830 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:54:43.242848 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:54:43.242866 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:43.242889 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:43.242909 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:54:43.242927 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:54:43.242945 kernel: kvm [1]: HYP mode not available
Dec 13 01:54:43.242963 kernel: Initialise system trusted keyrings
Dec 13 01:54:43.243029 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:54:43.243052 kernel: Key type asymmetric registered
Dec 13 01:54:43.243070 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:54:43.243089 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:54:43.243114 kernel: io scheduler mq-deadline registered
Dec 13 01:54:43.243133 kernel: io scheduler kyber registered
Dec 13 01:54:43.243151 kernel: io scheduler bfq registered
Dec 13 01:54:43.243381 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:54:43.243409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:54:43.243429 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:54:43.243447 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:54:43.243466 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:54:43.243490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:54:43.243509 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:54:43.243714 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:54:43.243739 kernel: printk: console [ttyS0] disabled
Dec 13 01:54:43.243759 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:54:43.243778 kernel: printk: console [ttyS0] enabled
Dec 13 01:54:43.243796 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:54:43.243814 kernel: thunder_xcv, ver 1.0
Dec 13 01:54:43.243832 kernel: thunder_bgx, ver 1.0
Dec 13 01:54:43.243850 kernel: nicpf, ver 1.0
Dec 13 01:54:43.243874 kernel: nicvf, ver 1.0
Dec 13 01:54:43.244134 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:54:43.244349 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:42 UTC (1734054882)
Dec 13 01:54:43.244376 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:54:43.244396 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:54:43.244415 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:54:43.244433 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:54:43.244458 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:54:43.244477 kernel: Segment Routing with IPv6
Dec 13 01:54:43.244495 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:54:43.244514 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:54:43.244532 kernel: Key type dns_resolver registered
Dec 13 01:54:43.244550 kernel: registered taskstats version 1
Dec 13 01:54:43.244568 kernel: Loading compiled-in X.509 certificates
Dec 13 01:54:43.244587 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:54:43.244605 kernel: Key type .fscrypt registered
Dec 13 01:54:43.244622 kernel: Key type fscrypt-provisioning registered
Dec 13 01:54:43.244645 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:54:43.244663 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:54:43.244682 kernel: ima: No architecture policies found
Dec 13 01:54:43.244700 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:54:43.244718 kernel: clk: Disabling unused clocks
Dec 13 01:54:43.244736 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:54:43.244755 kernel: Run /init as init process
Dec 13 01:54:43.244773 kernel: with arguments:
Dec 13 01:54:43.244791 kernel: /init
Dec 13 01:54:43.244813 kernel: with environment:
Dec 13 01:54:43.244831 kernel: HOME=/
Dec 13 01:54:43.244849 kernel: TERM=linux
Dec 13 01:54:43.244867 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:54:43.244890 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:43.244913 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:43.244933 systemd[1]: Detected architecture arm64.
Dec 13 01:54:43.244957 systemd[1]: Running in initrd.
Dec 13 01:54:43.245011 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:54:43.245036 systemd[1]: Hostname set to <localhost>.
Dec 13 01:54:43.245058 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:43.245079 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:54:43.245099 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:43.245119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:43.245141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:54:43.245169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:43.245190 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:54:43.245211 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:54:43.245235 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:54:43.245256 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:54:43.245276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:43.245296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:43.245321 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:54:43.245342 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:43.245362 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:43.245382 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:54:43.245402 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:43.245422 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:43.245443 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:43.245464 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:43.245484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:43.245510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:43.245530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:43.245550 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:54:43.245570 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:54:43.245591 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:43.245611 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:54:43.245631 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:54:43.245651 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:43.245677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:43.245697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:43.245718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:43.245738 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:43.245758 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:54:43.245823 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:54:43.245873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:43.245894 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:54:43.245914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:43.245939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:43.245960 systemd-journald[251]: Journal started
Dec 13 01:54:43.246034 systemd-journald[251]: Runtime Journal (/run/log/journal/ec272be647ae28de39a4def1a12e2a54) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:43.202130 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:54:43.250072 kernel: Bridge firewalling registered
Dec 13 01:54:43.250610 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:54:43.257118 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:43.259056 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:43.276321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:43.289465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:43.300237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:43.307277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:43.316937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:43.344693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:43.352081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:43.370310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:43.377964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:43.395763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:54:43.429015 dracut-cmdline[288]: dracut-dracut-053
Dec 13 01:54:43.432664 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:43.475265 systemd-resolved[283]: Positive Trust Anchors:
Dec 13 01:54:43.475307 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:43.477300 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:43.593039 kernel: SCSI subsystem initialized
Dec 13 01:54:43.600099 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:54:43.613094 kernel: iscsi: registered transport (tcp)
Dec 13 01:54:43.635102 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:54:43.635170 kernel: QLogic iSCSI HBA Driver
Dec 13 01:54:43.726117 kernel: random: crng init done
Dec 13 01:54:43.726446 systemd-resolved[283]: Defaulting to hostname 'linux'.
Dec 13 01:54:43.730056 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:43.734065 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:43.757800 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:43.771337 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:54:43.805024 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:54:43.805100 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:54:43.807002 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:54:43.872018 kernel: raid6: neonx8 gen() 6692 MB/s
Dec 13 01:54:43.889011 kernel: raid6: neonx4 gen() 6503 MB/s
Dec 13 01:54:43.906008 kernel: raid6: neonx2 gen() 5434 MB/s
Dec 13 01:54:43.923008 kernel: raid6: neonx1 gen() 3944 MB/s
Dec 13 01:54:43.940009 kernel: raid6: int64x8 gen() 3796 MB/s
Dec 13 01:54:43.957007 kernel: raid6: int64x4 gen() 3717 MB/s
Dec 13 01:54:43.974007 kernel: raid6: int64x2 gen() 3604 MB/s
Dec 13 01:54:43.991744 kernel: raid6: int64x1 gen() 2772 MB/s
Dec 13 01:54:43.991793 kernel: raid6: using algorithm neonx8 gen() 6692 MB/s
Dec 13 01:54:44.009757 kernel: raid6: .... xor() 4869 MB/s, rmw enabled
Dec 13 01:54:44.009825 kernel: raid6: using neon recovery algorithm
Dec 13 01:54:44.018404 kernel: xor: measuring software checksum speed
Dec 13 01:54:44.018462 kernel: 8regs : 10678 MB/sec
Dec 13 01:54:44.019597 kernel: 32regs : 11627 MB/sec
Dec 13 01:54:44.020908 kernel: arm64_neon : 9519 MB/sec
Dec 13 01:54:44.020940 kernel: xor: using function: 32regs (11627 MB/sec)
Dec 13 01:54:44.106028 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:54:44.125668 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:44.136322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:44.175931 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Dec 13 01:54:44.185368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:44.205411 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:54:44.234010 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Dec 13 01:54:44.290888 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:44.304808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:44.417030 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:44.434432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:54:44.468026 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:44.471966 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:44.474848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:44.475761 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:44.503515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:54:44.538615 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:44.612031 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:54:44.617945 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:54:44.648258 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:54:44.653897 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:54:44.654235 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ee:55:78:8e:d7
Dec 13 01:54:44.654469 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:54:44.654498 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:54:44.642294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:44.642524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:44.645418 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:44.647547 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:44.648173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:44.650443 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:44.662972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:44.680084 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:54:44.687843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:54:44.687909 kernel: GPT:9289727 != 16777215
Dec 13 01:54:44.687936 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:54:44.687961 kernel: GPT:9289727 != 16777215
Dec 13 01:54:44.689263 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:54:44.690263 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:44.694612 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:44.705111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:44.721306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:44.769446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:44.818006 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Dec 13 01:54:44.844217 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (527)
Dec 13 01:54:44.851536 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:54:44.935421 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:54:44.951881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:44.966901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:44.971940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:44.989285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:54:45.002239 disk-uuid[661]: Primary Header is updated.
Dec 13 01:54:45.002239 disk-uuid[661]: Secondary Entries is updated.
Dec 13 01:54:45.002239 disk-uuid[661]: Secondary Header is updated.
Dec 13 01:54:45.019019 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:45.025033 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:45.033008 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:46.032005 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:46.034832 disk-uuid[662]: The operation has completed successfully.
Dec 13 01:54:46.213946 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:54:46.216044 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:54:46.261315 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:54:46.282069 sh[1005]: Success
Dec 13 01:54:46.306026 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:54:46.411875 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:54:46.423207 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:54:46.428570 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:54:46.471686 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:54:46.471747 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:46.473485 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:54:46.475745 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:54:46.475778 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:54:46.532016 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:54:46.546128 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:54:46.549756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:54:46.567334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:54:46.574277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:54:46.607764 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:46.607838 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:46.609076 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:46.625020 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:46.642112 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:54:46.645233 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:46.654766 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:54:46.668359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:54:46.756527 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:54:46.773255 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:46.821542 systemd-networkd[1207]: lo: Link UP
Dec 13 01:54:46.821564 systemd-networkd[1207]: lo: Gained carrier
Dec 13 01:54:46.824382 systemd-networkd[1207]: Enumeration completed
Dec 13 01:54:46.825188 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:46.825195 systemd-networkd[1207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:54:46.826876 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:46.831095 systemd[1]: Reached target network.target - Network.
Dec 13 01:54:46.844329 systemd-networkd[1207]: eth0: Link UP
Dec 13 01:54:46.844347 systemd-networkd[1207]: eth0: Gained carrier
Dec 13 01:54:46.844366 systemd-networkd[1207]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:46.860069 systemd-networkd[1207]: eth0: DHCPv4 address 172.31.28.238/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:54:47.037323 ignition[1120]: Ignition 2.19.0
Dec 13 01:54:47.037345 ignition[1120]: Stage: fetch-offline
Dec 13 01:54:47.037877 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:47.037902 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:47.038911 ignition[1120]: Ignition finished successfully
Dec 13 01:54:47.048551 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:47.062417 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:54:47.086956 ignition[1216]: Ignition 2.19.0
Dec 13 01:54:47.087013 ignition[1216]: Stage: fetch
Dec 13 01:54:47.088637 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:47.088663 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:47.088884 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:47.110915 ignition[1216]: PUT result: OK
Dec 13 01:54:47.115195 ignition[1216]: parsed url from cmdline: ""
Dec 13 01:54:47.115328 ignition[1216]: no config URL provided
Dec 13 01:54:47.115348 ignition[1216]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:54:47.115374 ignition[1216]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:54:47.115405 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:47.122918 ignition[1216]: PUT result: OK
Dec 13 01:54:47.123086 ignition[1216]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:54:47.126705 ignition[1216]: GET result: OK
Dec 13 01:54:47.126791 ignition[1216]: parsing config with SHA512: d6af6b2fb45b564c30a42a8aa7ce20306e01a64419d4fab45e5cfb74e1d85b09d0bccd89a2ef3c6e715ee8248852f0e24945245011e1945b621b7072b2805f8c
Dec 13 01:54:47.133612 unknown[1216]: fetched base config from "system"
Dec 13 01:54:47.133640 unknown[1216]: fetched base config from "system"
Dec 13 01:54:47.135466 ignition[1216]: fetch: fetch complete
Dec 13 01:54:47.133670 unknown[1216]: fetched user config from "aws"
Dec 13 01:54:47.135478 ignition[1216]: fetch: fetch passed
Dec 13 01:54:47.135573 ignition[1216]: Ignition finished successfully
Dec 13 01:54:47.143683 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:54:47.155362 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:54:47.182458 ignition[1222]: Ignition 2.19.0
Dec 13 01:54:47.182490 ignition[1222]: Stage: kargs
Dec 13 01:54:47.184117 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:47.184143 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:47.185210 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:47.193773 ignition[1222]: PUT result: OK
Dec 13 01:54:47.197948 ignition[1222]: kargs: kargs passed
Dec 13 01:54:47.198138 ignition[1222]: Ignition finished successfully
Dec 13 01:54:47.202881 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:47.217745 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:54:47.241307 ignition[1228]: Ignition 2.19.0
Dec 13 01:54:47.241334 ignition[1228]: Stage: disks
Dec 13 01:54:47.244117 ignition[1228]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:47.244160 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:47.248003 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:47.250963 ignition[1228]: PUT result: OK
Dec 13 01:54:47.255254 ignition[1228]: disks: disks passed
Dec 13 01:54:47.256622 ignition[1228]: Ignition finished successfully
Dec 13 01:54:47.261078 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:54:47.265219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:47.269390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:54:47.271701 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:47.273626 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:54:47.275575 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:54:47.293410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:54:47.326528 systemd-fsck[1237]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:54:47.333599 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:54:47.345207 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:54:47.442036 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:54:47.443028 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:54:47.446549 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:54:47.464264 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:54:47.473291 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:54:47.479219 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:54:47.479318 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:54:47.479367 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:54:47.507627 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1256)
Dec 13 01:54:47.507694 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:47.507733 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:47.510219 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:47.513112 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:54:47.523771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:54:47.533004 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:47.536302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:54:47.796320 initrd-setup-root[1282]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:54:47.804770 initrd-setup-root[1289]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:54:47.814344 initrd-setup-root[1296]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:54:47.832560 initrd-setup-root[1303]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:54:48.118743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:54:48.133161 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:54:48.139299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:48.155555 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:54:48.159148 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:48.202752 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:54:48.210125 ignition[1371]: INFO : Ignition 2.19.0
Dec 13 01:54:48.210125 ignition[1371]: INFO : Stage: mount
Dec 13 01:54:48.213278 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:48.213278 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:48.217335 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:48.220574 ignition[1371]: INFO : PUT result: OK
Dec 13 01:54:48.225244 ignition[1371]: INFO : mount: mount passed
Dec 13 01:54:48.225244 ignition[1371]: INFO : Ignition finished successfully
Dec 13 01:54:48.231733 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:54:48.240230 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:54:48.277393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:54:48.300035 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1382)
Dec 13 01:54:48.303850 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:48.303898 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:48.305236 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:48.309994 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:48.314329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:54:48.350027 ignition[1399]: INFO : Ignition 2.19.0
Dec 13 01:54:48.350027 ignition[1399]: INFO : Stage: files
Dec 13 01:54:48.353405 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:48.353405 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:48.353405 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:48.360682 ignition[1399]: INFO : PUT result: OK
Dec 13 01:54:48.365221 ignition[1399]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:54:48.392207 ignition[1399]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:54:48.392207 ignition[1399]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:54:48.398650 ignition[1399]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:54:48.401348 ignition[1399]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:54:48.403814 ignition[1399]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:54:48.402923 unknown[1399]: wrote ssh authorized keys file for user: core
Dec 13 01:54:48.408653 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:54:48.408653 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:54:48.408653 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:54:48.418917 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:54:48.418917 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:54:48.418917 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:54:48.418917 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:48.434017 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:48.434017 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:48.434017 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:54:48.797110 systemd-networkd[1207]: eth0: Gained IPv6LL
Dec 13 01:54:48.908047 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Dec 13 01:54:49.257348 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:49.257348 ignition[1399]: INFO : files: op(8): [started] processing unit "containerd.service"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: op(8): [finished] processing unit "containerd.service"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:54:49.266166 ignition[1399]: INFO : files: files passed
Dec 13 01:54:49.266166 ignition[1399]: INFO : Ignition finished successfully
Dec 13 01:54:49.289037 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:54:49.310631 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:54:49.317103 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:54:49.326460 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:54:49.328842 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:54:49.357599 initrd-setup-root-after-ignition[1428]: grep:
Dec 13 01:54:49.357599 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:49.362875 initrd-setup-root-after-ignition[1428]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:49.362875 initrd-setup-root-after-ignition[1428]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:49.370656 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:54:49.374378 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:54:49.391328 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:54:49.447422 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:54:49.447802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:54:49.455306 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:54:49.457299 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:54:49.459268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:54:49.476047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:54:49.505185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:54:49.527420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:54:49.552658 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:49.557557 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:49.560065 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:54:49.562194 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:54:49.562505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:54:49.572090 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:54:49.574222 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:54:49.576521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:54:49.583840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:54:49.586288 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:49.589042 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:54:49.596628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:49.600165 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:54:49.604280 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:54:49.609703 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:54:49.611313 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:54:49.611542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:49.616794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:49.618880 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:49.621589 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:54:49.628997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:49.631619 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:54:49.632135 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:49.639832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:54:49.640095 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:54:49.643066 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:54:49.643497 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:54:49.665614 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:54:49.671744 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:49.676159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:54:49.677568 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:49.684583 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:54:49.685290 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:49.701235 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:54:49.705099 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:54:49.723540 ignition[1452]: INFO : Ignition 2.19.0
Dec 13 01:54:49.723540 ignition[1452]: INFO : Stage: umount
Dec 13 01:54:49.723540 ignition[1452]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:49.723540 ignition[1452]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:49.723540 ignition[1452]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:49.734896 ignition[1452]: INFO : PUT result: OK
Dec 13 01:54:49.737753 ignition[1452]: INFO : umount: umount passed
Dec 13 01:54:49.739407 ignition[1452]: INFO : Ignition finished successfully
Dec 13 01:54:49.743838 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:54:49.745933 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:54:49.750343 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:54:49.750436 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:54:49.758627 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:54:49.760600 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:49.765015 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:54:49.768139 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:54:49.769904 systemd[1]: Stopped target network.target - Network.
Dec 13 01:54:49.770535 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:54:49.772020 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:49.780955 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:54:49.782537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:54:49.789105 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:49.791527 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:54:49.796232 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:54:49.799667 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:54:49.799748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:49.805333 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:54:49.805416 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:49.807636 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:54:49.807727 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:54:49.810032 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:54:49.810111 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:49.812881 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:54:49.817588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:49.825096 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:54:49.827079 systemd-networkd[1207]: eth0: DHCPv6 lease lost
Dec 13 01:54:49.827835 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:54:49.828093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:49.850097 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:54:49.852380 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:54:49.858001 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:54:49.859168 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:49.873325 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:54:49.876894 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:54:49.877036 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:54:49.881165 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:54:49.881260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:49.889884 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:54:49.890565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:49.895517 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:54:49.895616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:49.898243 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:49.929516 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:54:49.933123 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:54:49.939440 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:54:49.939638 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:54:49.962646 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:54:49.964911 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:49.971609 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:54:49.972012 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:54:49.978294 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:54:49.978431 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:49.980541 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:54:49.980622 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:49.983187 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:54:49.983286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:49.997114 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:54:49.997219 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:49.999360 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:49.999447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:50.017210 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:54:50.020842 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:54:50.020955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:50.023667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:54:50.023750 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:50.026425 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:54:50.026504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:50.029074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:50.029150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:50.074438 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:54:50.074839 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:54:50.081249 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:54:50.095250 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:54:50.120480 systemd[1]: Switching root.
Dec 13 01:54:50.156025 systemd-journald[251]: Journal stopped
Dec 13 01:54:52.279304 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:54:52.279435 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:54:52.279477 kernel: SELinux: policy capability open_perms=1
Dec 13 01:54:52.279511 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:54:52.279541 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:54:52.279578 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:54:52.279609 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:54:52.279641 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:54:52.279670 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:54:52.279700 kernel: audit: type=1403 audit(1734054890.652:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:54:52.279742 systemd[1]: Successfully loaded SELinux policy in 69.231ms.
Dec 13 01:54:52.279789 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.582ms.
Dec 13 01:54:52.279827 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:52.279860 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:52.279894 systemd[1]: Detected architecture arm64.
Dec 13 01:54:52.279924 systemd[1]: Detected first boot.
Dec 13 01:54:52.279957 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:52.282738 zram_generator::config[1512]: No configuration found.
Dec 13 01:54:52.282792 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:54:52.282826 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:54:52.282860 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:54:52.282895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:54:52.282935 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:54:52.282969 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:54:52.283036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:54:52.283071 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:54:52.283104 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:54:52.283139 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:54:52.283172 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:54:52.283204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:52.283240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:52.283271 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:54:52.283304 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:54:52.283336 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:54:52.283369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:52.283403 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:54:52.283435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:52.283465 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:54:52.283503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:52.283540 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:52.283572 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:52.283604 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:52.283633 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:54:52.283667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:54:52.283696 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:52.283725 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:52.283755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:52.283784 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:52.283822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:52.283854 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:54:52.283884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:54:52.283915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:54:52.283945 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:54:52.286886 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:54:52.286961 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:54:52.287018 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:54:52.287064 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:54:52.287097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:54:52.287131 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:52.287162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:54:52.287193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:52.287226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:54:52.287258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:52.287291 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:54:52.287321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:52.287357 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:54:52.287388 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:54:52.287420 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:54:52.287452 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:52.287481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:52.287512 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:54:52.287541 kernel: fuse: init (API version 7.39)
Dec 13 01:54:52.287572 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:54:52.287602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:52.287636 kernel: loop: module loaded
Dec 13 01:54:52.287667 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:54:52.287696 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:54:52.287725 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:54:52.287754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:54:52.287784 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:54:52.287814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:54:52.287893 systemd-journald[1611]: Collecting audit messages is disabled.
Dec 13 01:54:52.287958 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:52.295070 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:54:52.295120 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:54:52.295152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:52.295183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:52.295219 systemd-journald[1611]: Journal started
Dec 13 01:54:52.295282 systemd-journald[1611]: Runtime Journal (/run/log/journal/ec272be647ae28de39a4def1a12e2a54) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:52.295356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:52.307069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:52.307156 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:52.314937 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:54:52.316251 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:54:52.321364 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:52.321752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:52.324849 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:52.328370 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:54:52.332351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:54:52.346015 kernel: ACPI: bus type drm_connector registered
Dec 13 01:54:52.347902 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:54:52.351403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:54:52.373709 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:54:52.381959 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:54:52.390259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:54:52.401154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:54:52.404211 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:54:52.417312 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:54:52.424276 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:54:52.427216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:52.440858 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:54:52.444227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:54:52.448755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:52.474436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:52.483398 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:54:52.485839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:54:52.495268 systemd-journald[1611]: Time spent on flushing to /var/log/journal/ec272be647ae28de39a4def1a12e2a54 is 87.445ms for 882 entries.
Dec 13 01:54:52.495268 systemd-journald[1611]: System Journal (/var/log/journal/ec272be647ae28de39a4def1a12e2a54) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:54:52.599897 systemd-journald[1611]: Received client request to flush runtime journal.
Dec 13 01:54:52.532724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:54:52.537168 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:54:52.571940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:52.606796 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:54:52.619379 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Dec 13 01:54:52.619418 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Dec 13 01:54:52.631564 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:52.647382 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:54:52.667767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:52.688370 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:54:52.733241 udevadm[1683]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:54:52.740188 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:54:52.751313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:52.799964 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Dec 13 01:54:52.800041 systemd-tmpfiles[1686]: ACLs are not supported, ignoring.
Dec 13 01:54:52.811078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:53.482039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:54:53.492543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:53.555578 systemd-udevd[1692]: Using default interface naming scheme 'v255'.
Dec 13 01:54:53.593365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:53.606407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:53.652142 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:54:53.753236 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:54:53.780811 (udev-worker)[1700]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:53.783025 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:54:53.794202 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1695)
Dec 13 01:54:53.835028 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1695)
Dec 13 01:54:53.948116 systemd-networkd[1696]: lo: Link UP
Dec 13 01:54:53.948139 systemd-networkd[1696]: lo: Gained carrier
Dec 13 01:54:53.950660 systemd-networkd[1696]: Enumeration completed
Dec 13 01:54:53.950881 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:53.956394 systemd-networkd[1696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:53.956416 systemd-networkd[1696]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:54:53.962432 systemd-networkd[1696]: eth0: Link UP
Dec 13 01:54:53.965187 systemd-networkd[1696]: eth0: Gained carrier
Dec 13 01:54:53.965237 systemd-networkd[1696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:53.973712 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:54:53.982741 systemd-networkd[1696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:53.987303 systemd-networkd[1696]: eth0: DHCPv4 address 172.31.28.238/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:54:54.037010 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1709)
Dec 13 01:54:54.075134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:54.243460 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:54:54.276333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:54.279752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:54.305463 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:54:54.333034 lvm[1821]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:54.370817 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:54:54.374408 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:54.388259 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:54:54.398474 lvm[1824]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:54.432820 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:54:54.436830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:54:54.439664 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:54:54.439852 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:54.442155 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:54:54.446100 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:54:54.455418 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:54:54.461463 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:54:54.463838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:54.469515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:54:54.486346 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:54:54.494304 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:54:54.498757 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:54:54.527675 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:54:54.531377 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:54:54.543525 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 01:54:54.548420 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:54:54.626024 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:54:54.663025 kernel: loop1: detected capacity change from 0 to 114328
Dec 13 01:54:54.751026 kernel: loop2: detected capacity change from 0 to 52536
Dec 13 01:54:54.790014 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 01:54:54.911042 kernel: loop4: detected capacity change from 0 to 194512
Dec 13 01:54:54.947023 kernel: loop5: detected capacity change from 0 to 114328
Dec 13 01:54:54.965031 kernel: loop6: detected capacity change from 0 to 52536
Dec 13 01:54:54.982052 kernel: loop7: detected capacity change from 0 to 114432
Dec 13 01:54:54.993544 (sd-merge)[1846]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:54:54.995130 (sd-merge)[1846]: Merged extensions into '/usr'.
Dec 13 01:54:55.002917 systemd[1]: Reloading requested from client PID 1832 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:54:55.002949 systemd[1]: Reloading...
Dec 13 01:54:55.123020 zram_generator::config[1870]: No configuration found.
Dec 13 01:54:55.416159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:54:55.559595 systemd[1]: Reloading finished in 555 ms.
Dec 13 01:54:55.587352 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:54:55.601566 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:54:55.617292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:55.638826 systemd[1]: Reloading requested from client PID 1931 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:54:55.638851 systemd[1]: Reloading...
Dec 13 01:54:55.667271 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:54:55.667942 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:54:55.670720 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:54:55.671546 systemd-tmpfiles[1932]: ACLs are not supported, ignoring.
Dec 13 01:54:55.671838 systemd-tmpfiles[1932]: ACLs are not supported, ignoring.
Dec 13 01:54:55.679393 systemd-tmpfiles[1932]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:54:55.679603 systemd-tmpfiles[1932]: Skipping /boot
Dec 13 01:54:55.698552 ldconfig[1828]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:54:55.705327 systemd-tmpfiles[1932]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:54:55.705355 systemd-tmpfiles[1932]: Skipping /boot
Dec 13 01:54:55.709182 systemd-networkd[1696]: eth0: Gained IPv6LL
Dec 13 01:54:55.820030 zram_generator::config[1968]: No configuration found.
Dec 13 01:54:56.051272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:54:56.192331 systemd[1]: Reloading finished in 552 ms.
Dec 13 01:54:56.221153 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:54:56.224744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:54:56.238410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:56.262358 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:54:56.271340 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:54:56.277799 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:54:56.301420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:56.315276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:54:56.338217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:54:56.343506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:56.364558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:56.386440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:56.392298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:56.401582 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:54:56.417900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:56.418351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:56.441601 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:54:56.448386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:56.448801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:56.459607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:56.466077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:56.494310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:54:56.508386 augenrules[2060]: No rules
Dec 13 01:54:56.515789 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:54:56.525043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:54:56.535937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:56.552501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:54:56.573439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:56.602300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:56.607392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:56.607793 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:54:56.613761 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:54:56.618902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:56.624811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:56.628526 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:54:56.632696 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:54:56.633106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:54:56.636687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:56.637101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:56.655619 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:54:56.661199 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:56.666540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:56.682260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:56.682700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:54:56.682930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:54:56.692605 systemd-resolved[2036]: Positive Trust Anchors:
Dec 13 01:54:56.692650 systemd-resolved[2036]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:56.692714 systemd-resolved[2036]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:56.702014 systemd-resolved[2036]: Defaulting to hostname 'linux'.
Dec 13 01:54:56.705516 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:56.707948 systemd[1]: Reached target network.target - Network.
Dec 13 01:54:56.709731 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:54:56.711805 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:56.713994 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:54:56.716128 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:54:56.718566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:54:56.721232 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:54:56.723404 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:54:56.725685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:54:56.728021 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:54:56.728067 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:56.729759 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:56.733633 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:54:56.738900 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:54:56.743452 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:54:56.748204 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:54:56.750387 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:56.752290 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:56.754415 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:54:56.754490 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:56.754537 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:56.759192 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:54:56.775435 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:54:56.785316 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:54:56.801086 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:54:56.807770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:54:56.811292 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:54:56.836873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:56.846479 jq[2095]: false Dec 13 01:54:56.845056 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Dec 13 01:54:56.870297 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:54:56.891031 extend-filesystems[2096]: Found loop4 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found loop5 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found loop6 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found loop7 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p1 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p2 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p3 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found usr Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p6 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p7 Dec 13 01:54:56.899242 extend-filesystems[2096]: Found nvme0n1p9 Dec 13 01:54:56.899242 extend-filesystems[2096]: Checking size of /dev/nvme0n1p9 Dec 13 01:54:56.892523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:54:56.937381 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:54:56.959730 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:54:56.963800 dbus-daemon[2093]: [system] SELinux support is enabled Dec 13 01:54:56.976954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:54:56.979502 extend-filesystems[2096]: Resized partition /dev/nvme0n1p9 Dec 13 01:54:56.990777 dbus-daemon[2093]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1696 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:54:57.003452 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 13 01:54:57.008455 extend-filesystems[2116]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:54:57.007679 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:54:57.018549 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:54:57.018323 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:54:57.024767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:54:57.030916 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:54:57.051497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:54:57.052399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:54:57.109758 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:54:57.110385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:54:57.151475 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:54:57.164338 jq[2122]: true Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: ---------------------------------------------------- Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: corporation. 
Support and training for ntp-4 are Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: available at https://www.nwtime.org/support Dec 13 01:54:57.164759 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: ---------------------------------------------------- Dec 13 01:54:57.136182 ntpd[2103]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:57.161851 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:54:57.180254 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: proto: precision = 0.096 usec (-23) Dec 13 01:54:57.180254 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: basedate set to 2024-11-30 Dec 13 01:54:57.180254 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:57.136245 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:57.136272 ntpd[2103]: ---------------------------------------------------- Dec 13 01:54:57.136293 ntpd[2103]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:57.136312 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:57.136337 ntpd[2103]: corporation. 
Support and training for ntp-4 are Dec 13 01:54:57.136357 ntpd[2103]: available at https://www.nwtime.org/support Dec 13 01:54:57.136376 ntpd[2103]: ---------------------------------------------------- Dec 13 01:54:57.171729 ntpd[2103]: proto: precision = 0.096 usec (-23) Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen normally on 3 eth0 172.31.28.238:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listen normally on 5 eth0 [fe80::4ee:55ff:fe78:8ed7%2]:123 Dec 13 01:54:57.187401 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: Listening on routing socket on fd #22 for interface updates Dec 13 01:54:57.173314 ntpd[2103]: basedate set to 2024-11-30 Dec 13 01:54:57.173347 ntpd[2103]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:57.185174 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:57.185260 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:57.185544 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:57.185614 ntpd[2103]: Listen normally on 3 eth0 172.31.28.238:123 Dec 13 01:54:57.185685 ntpd[2103]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:57.185760 ntpd[2103]: Listen normally on 5 eth0 [fe80::4ee:55ff:fe78:8ed7%2]:123 Dec 13 01:54:57.185825 ntpd[2103]: Listening on routing socket on fd #22 for interface updates Dec 13 01:54:57.231438 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:57.231438 ntpd[2103]: 13 Dec 01:54:57 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:57.226121 
ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:57.225910 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:54:57.226176 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:57.249065 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:54:57.262663 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:54:57.251721 (ntainerd)[2144]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:54:57.281796 update_engine[2120]: I20241213 01:54:57.257375 2120 main.cc:92] Flatcar Update Engine starting Dec 13 01:54:57.281796 update_engine[2120]: I20241213 01:54:57.273231 2120 update_check_scheduler.cc:74] Next update check in 9m11s Dec 13 01:54:57.271120 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:54:57.275523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:54:57.318588 jq[2143]: true Dec 13 01:54:57.327326 extend-filesystems[2116]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:54:57.327326 extend-filesystems[2116]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:54:57.327326 extend-filesystems[2116]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:54:57.275574 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:54:57.342838 extend-filesystems[2096]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:54:57.342838 extend-filesystems[2096]: Found nvme0n1p4 Dec 13 01:54:57.304728 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Dec 13 01:54:57.308326 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:54:57.308381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:54:57.315535 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:54:57.333366 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:54:57.418057 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:54:57.419093 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:54:57.465717 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:54:57.484333 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:54:57.502822 coreos-metadata[2092]: Dec 13 01:54:57.502 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:54:57.515460 coreos-metadata[2092]: Dec 13 01:54:57.513 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:54:57.515909 coreos-metadata[2092]: Dec 13 01:54:57.515 INFO Fetch successful Dec 13 01:54:57.515909 coreos-metadata[2092]: Dec 13 01:54:57.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:54:57.527852 coreos-metadata[2092]: Dec 13 01:54:57.527 INFO Fetch successful Dec 13 01:54:57.527852 coreos-metadata[2092]: Dec 13 01:54:57.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:54:57.528955 coreos-metadata[2092]: Dec 13 01:54:57.528 INFO Fetch successful Dec 13 01:54:57.528955 coreos-metadata[2092]: Dec 13 01:54:57.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:54:57.533835 coreos-metadata[2092]: Dec 13 01:54:57.533 INFO Fetch 
successful Dec 13 01:54:57.533835 coreos-metadata[2092]: Dec 13 01:54:57.533 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:54:57.544475 coreos-metadata[2092]: Dec 13 01:54:57.541 INFO Fetch failed with 404: resource not found Dec 13 01:54:57.544475 coreos-metadata[2092]: Dec 13 01:54:57.541 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:54:57.544475 coreos-metadata[2092]: Dec 13 01:54:57.544 INFO Fetch successful Dec 13 01:54:57.544475 coreos-metadata[2092]: Dec 13 01:54:57.544 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:54:57.549387 coreos-metadata[2092]: Dec 13 01:54:57.546 INFO Fetch successful Dec 13 01:54:57.549387 coreos-metadata[2092]: Dec 13 01:54:57.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:54:57.553940 coreos-metadata[2092]: Dec 13 01:54:57.553 INFO Fetch successful Dec 13 01:54:57.553940 coreos-metadata[2092]: Dec 13 01:54:57.553 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:54:57.554576 coreos-metadata[2092]: Dec 13 01:54:57.554 INFO Fetch successful Dec 13 01:54:57.554576 coreos-metadata[2092]: Dec 13 01:54:57.554 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:54:57.560385 coreos-metadata[2092]: Dec 13 01:54:57.559 INFO Fetch successful Dec 13 01:54:57.563211 systemd-logind[2119]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:54:57.563267 systemd-logind[2119]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:54:57.563622 systemd-logind[2119]: New seat seat0. Dec 13 01:54:57.565086 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:54:57.681377 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 13 01:54:57.685222 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:54:57.689890 bash[2200]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:54:57.696555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:54:57.712825 systemd[1]: Starting sshkeys.service... Dec 13 01:54:57.806541 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:54:57.869728 amazon-ssm-agent[2173]: Initializing new seelog logger Dec 13 01:54:57.887257 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2215) Dec 13 01:54:57.885990 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 processing appconfig overrides Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.887420 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.891010 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO Proxy environment variables: Dec 13 01:54:57.892954 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 processing appconfig overrides Dec 13 01:54:57.897617 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.897617 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:54:57.897894 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 processing appconfig overrides Dec 13 01:54:57.933083 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.933083 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:54:57.933083 amazon-ssm-agent[2173]: 2024/12/13 01:54:57 processing appconfig overrides Dec 13 01:54:57.992693 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO no_proxy: Dec 13 01:54:58.073655 containerd[2144]: time="2024-12-13T01:54:58.073517637Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:54:58.093074 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO https_proxy: Dec 13 01:54:58.145020 containerd[2144]: time="2024-12-13T01:54:58.144577593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154047165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154132881Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154170597Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154490397Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154525893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154644069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155083 containerd[2144]: time="2024-12-13T01:54:58.154674009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155762 containerd[2144]: time="2024-12-13T01:54:58.155697297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:58.155894 containerd[2144]: time="2024-12-13T01:54:58.155864121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.156075 containerd[2144]: time="2024-12-13T01:54:58.156037881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:58.156229 containerd[2144]: time="2024-12-13T01:54:58.156198681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.158642 containerd[2144]: time="2024-12-13T01:54:58.158305761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:54:58.160674 containerd[2144]: time="2024-12-13T01:54:58.160601937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:54:58.163553 containerd[2144]: time="2024-12-13T01:54:58.163175949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:54:58.163553 containerd[2144]: time="2024-12-13T01:54:58.163259373Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:54:58.163856 containerd[2144]: time="2024-12-13T01:54:58.163793001Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:54:58.165495 containerd[2144]: time="2024-12-13T01:54:58.165239037Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.186082161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.186323001Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.186476841Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.186608529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.186773205Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:54:58.187411 containerd[2144]: time="2024-12-13T01:54:58.187203777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.191867841Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192409569Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192487977Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192567081Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192608409Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192674973Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192735261Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.192783177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.193778637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194006 containerd[2144]: time="2024-12-13T01:54:58.193925841Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 01:54:58.194712 containerd[2144]: time="2024-12-13T01:54:58.193962345Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194712 containerd[2144]: time="2024-12-13T01:54:58.194583273Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:54:58.194712 containerd[2144]: time="2024-12-13T01:54:58.194670309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.194939 containerd[2144]: time="2024-12-13T01:54:58.194906949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.195129 containerd[2144]: time="2024-12-13T01:54:58.195097977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.197049 containerd[2144]: time="2024-12-13T01:54:58.196273413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.197049 containerd[2144]: time="2024-12-13T01:54:58.196767501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.198669 containerd[2144]: time="2024-12-13T01:54:58.197408601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.198669 containerd[2144]: time="2024-12-13T01:54:58.198313977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.198669 containerd[2144]: time="2024-12-13T01:54:58.198394089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.198669 containerd[2144]: time="2024-12-13T01:54:58.198454641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Dec 13 01:54:58.200685 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO http_proxy: Dec 13 01:54:58.200858 containerd[2144]: time="2024-12-13T01:54:58.199033893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.200858 containerd[2144]: time="2024-12-13T01:54:58.199115901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.200858 containerd[2144]: time="2024-12-13T01:54:58.199683033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.200858 containerd[2144]: time="2024-12-13T01:54:58.199778061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.200858 containerd[2144]: time="2024-12-13T01:54:58.200104317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:54:58.202481 containerd[2144]: time="2024-12-13T01:54:58.200547501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.202481 containerd[2144]: time="2024-12-13T01:54:58.201174633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:54:58.202481 containerd[2144]: time="2024-12-13T01:54:58.201241053Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203065629Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203166201Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203197089Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203255277Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203285553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:54:58.203589 containerd[2144]: time="2024-12-13T01:54:58.203541082Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:54:58.205412 containerd[2144]: time="2024-12-13T01:54:58.204053338Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:54:58.205412 containerd[2144]: time="2024-12-13T01:54:58.204137110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:54:58.206455 containerd[2144]: time="2024-12-13T01:54:58.206304982Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:54:58.211275 containerd[2144]: time="2024-12-13T01:54:58.209327242Z" level=info msg="Connect containerd service"
Dec 13 01:54:58.211275 containerd[2144]: time="2024-12-13T01:54:58.209444110Z" level=info msg="using legacy CRI server"
Dec 13 01:54:58.211275 containerd[2144]: time="2024-12-13T01:54:58.209463298Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:54:58.211275 containerd[2144]: time="2024-12-13T01:54:58.209627362Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:54:58.211275 containerd[2144]: time="2024-12-13T01:54:58.210810946Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.212958430Z" level=info msg="Start subscribing containerd event"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.213096214Z" level=info msg="Start recovering state"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.213229042Z" level=info msg="Start event monitor"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.213253666Z" level=info msg="Start snapshots syncer"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.213275446Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:54:58.214441 containerd[2144]: time="2024-12-13T01:54:58.213295726Z" level=info msg="Start streaming server"
Dec 13 01:54:58.217033 containerd[2144]: time="2024-12-13T01:54:58.216951022Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:54:58.217298 containerd[2144]: time="2024-12-13T01:54:58.217264330Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:54:58.220639 containerd[2144]: time="2024-12-13T01:54:58.218327866Z" level=info msg="containerd successfully booted in 0.146379s"
Dec 13 01:54:58.218500 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:54:58.223322 locksmithd[2162]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:54:58.248754 coreos-metadata[2222]: Dec 13 01:54:58.248 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:54:58.258153 coreos-metadata[2222]: Dec 13 01:54:58.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 13 01:54:58.258153 coreos-metadata[2222]: Dec 13 01:54:58.257 INFO Fetch successful
Dec 13 01:54:58.258153 coreos-metadata[2222]: Dec 13 01:54:58.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 01:54:58.261078 coreos-metadata[2222]: Dec 13 01:54:58.260 INFO Fetch successful
Dec 13 01:54:58.270387 unknown[2222]: wrote ssh authorized keys file for user: core
Dec 13 01:54:58.271630 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:54:58.271909 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:54:58.289947 dbus-daemon[2093]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2159 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:54:58.303011 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO Checking if agent identity type OnPrem can be assumed
Dec 13 01:54:58.320656 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:54:58.351219 update-ssh-keys[2298]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:54:58.358383 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:54:58.381712 systemd[1]: Finished sshkeys.service.
Dec 13 01:54:58.403015 amazon-ssm-agent[2173]: 2024-12-13 01:54:57 INFO Checking if agent identity type EC2 can be assumed
Dec 13 01:54:58.429615 polkitd[2300]: Started polkitd version 121
Dec 13 01:54:58.498467 polkitd[2300]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 01:54:58.500134 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO Agent will take identity from EC2
Dec 13 01:54:58.498622 polkitd[2300]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 01:54:58.508060 polkitd[2300]: Finished loading, compiling and executing 2 rules
Dec 13 01:54:58.512817 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 01:54:58.513112 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 01:54:58.520097 polkitd[2300]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 01:54:58.587526 systemd-resolved[2036]: System hostname changed to 'ip-172-31-28-238'.
Dec 13 01:54:58.587661 systemd-hostnamed[2159]: Hostname set to <ip-172-31-28-238> (transient)
Dec 13 01:54:58.601184 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:54:58.700097 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:54:58.800065 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:54:58.901019 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Dec 13 01:54:58.998734 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Dec 13 01:54:59.098345 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] Starting Core Agent
Dec 13 01:54:59.158940 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Dec 13 01:54:59.158940 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [Registrar] Starting registrar module
Dec 13 01:54:59.158940 amazon-ssm-agent[2173]: 2024-12-13 01:54:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Dec 13 01:54:59.159195 amazon-ssm-agent[2173]: 2024-12-13 01:54:59 INFO [EC2Identity] EC2 registration was successful.
Dec 13 01:54:59.159195 amazon-ssm-agent[2173]: 2024-12-13 01:54:59 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:54:59.159195 amazon-ssm-agent[2173]: 2024-12-13 01:54:59 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:54:59.159195 amazon-ssm-agent[2173]: 2024-12-13 01:54:59 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:54:59.198388 amazon-ssm-agent[2173]: 2024-12-13 01:54:59 INFO [CredentialRefresher] Next credential rotation will be in 32.24164176346667 minutes
Dec 13 01:54:59.803245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:54:59.807604 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:54:59.907364 sshd_keygen[2153]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:54:59.952032 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:54:59.962530 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:54:59.985715 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:54:59.988315 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:54:59.998496 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:55:00.033613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:55:00.046723 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:55:00.051534 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:55:00.056353 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:55:00.058730 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:55:00.062176 systemd[1]: Startup finished in 8.986s (kernel) + 9.477s (userspace) = 18.464s.
Dec 13 01:55:00.191183 amazon-ssm-agent[2173]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:55:00.292136 amazon-ssm-agent[2173]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2385) started
Dec 13 01:55:00.392944 amazon-ssm-agent[2173]: 2024-12-13 01:55:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:55:01.162207 kubelet[2356]: E1213 01:55:01.162104 2356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:55:01.168218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:55:01.168634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:55:04.440493 systemd-resolved[2036]: Clock change detected. Flushing caches.
Dec 13 01:55:05.351173 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:55:05.359895 systemd[1]: Started sshd@0-172.31.28.238:22-139.178.68.195:36142.service - OpenSSH per-connection server daemon (139.178.68.195:36142).
Dec 13 01:55:05.569460 sshd[2400]: Accepted publickey for core from 139.178.68.195 port 36142 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:05.572963 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:05.592293 systemd-logind[2119]: New session 1 of user core.
Dec 13 01:55:05.593261 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:55:05.599862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:55:05.635935 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:55:05.648066 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:55:05.668041 (systemd)[2406]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:55:05.879471 systemd[2406]: Queued start job for default target default.target.
Dec 13 01:55:05.881455 systemd[2406]: Created slice app.slice - User Application Slice.
Dec 13 01:55:05.881671 systemd[2406]: Reached target paths.target - Paths.
Dec 13 01:55:05.881707 systemd[2406]: Reached target timers.target - Timers.
Dec 13 01:55:05.887603 systemd[2406]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:55:05.913645 systemd[2406]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:55:05.913765 systemd[2406]: Reached target sockets.target - Sockets.
Dec 13 01:55:05.913797 systemd[2406]: Reached target basic.target - Basic System.
Dec 13 01:55:05.913884 systemd[2406]: Reached target default.target - Main User Target.
Dec 13 01:55:05.913946 systemd[2406]: Startup finished in 234ms.
Dec 13 01:55:05.914239 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:55:05.926217 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:55:06.074266 systemd[1]: Started sshd@1-172.31.28.238:22-139.178.68.195:57122.service - OpenSSH per-connection server daemon (139.178.68.195:57122).
Dec 13 01:55:06.245925 sshd[2418]: Accepted publickey for core from 139.178.68.195 port 57122 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:06.248554 sshd[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:06.256910 systemd-logind[2119]: New session 2 of user core.
Dec 13 01:55:06.266985 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:55:06.395281 sshd[2418]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:06.400927 systemd[1]: sshd@1-172.31.28.238:22-139.178.68.195:57122.service: Deactivated successfully.
Dec 13 01:55:06.407318 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:55:06.409176 systemd-logind[2119]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:55:06.411212 systemd-logind[2119]: Removed session 2.
Dec 13 01:55:06.429823 systemd[1]: Started sshd@2-172.31.28.238:22-139.178.68.195:57132.service - OpenSSH per-connection server daemon (139.178.68.195:57132).
Dec 13 01:55:06.593399 sshd[2426]: Accepted publickey for core from 139.178.68.195 port 57132 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:06.595984 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:06.603330 systemd-logind[2119]: New session 3 of user core.
Dec 13 01:55:06.615872 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:55:06.735849 sshd[2426]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:06.741832 systemd[1]: sshd@2-172.31.28.238:22-139.178.68.195:57132.service: Deactivated successfully.
Dec 13 01:55:06.741915 systemd-logind[2119]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:55:06.750132 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:55:06.751493 systemd-logind[2119]: Removed session 3.
Dec 13 01:55:06.767893 systemd[1]: Started sshd@3-172.31.28.238:22-139.178.68.195:57148.service - OpenSSH per-connection server daemon (139.178.68.195:57148).
Dec 13 01:55:06.934623 sshd[2434]: Accepted publickey for core from 139.178.68.195 port 57148 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:06.937107 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:06.944472 systemd-logind[2119]: New session 4 of user core.
Dec 13 01:55:06.953836 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:55:07.082720 sshd[2434]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:07.090224 systemd[1]: sshd@3-172.31.28.238:22-139.178.68.195:57148.service: Deactivated successfully.
Dec 13 01:55:07.095110 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:55:07.096162 systemd-logind[2119]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:55:07.098507 systemd-logind[2119]: Removed session 4.
Dec 13 01:55:07.112864 systemd[1]: Started sshd@4-172.31.28.238:22-139.178.68.195:57150.service - OpenSSH per-connection server daemon (139.178.68.195:57150).
Dec 13 01:55:07.284109 sshd[2442]: Accepted publickey for core from 139.178.68.195 port 57150 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:07.286640 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:07.294911 systemd-logind[2119]: New session 5 of user core.
Dec 13 01:55:07.306988 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:55:07.452126 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:55:07.452803 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:07.471421 sudo[2446]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:07.495418 sshd[2442]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:07.502782 systemd[1]: sshd@4-172.31.28.238:22-139.178.68.195:57150.service: Deactivated successfully.
Dec 13 01:55:07.509066 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:55:07.510553 systemd-logind[2119]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:55:07.512502 systemd-logind[2119]: Removed session 5.
Dec 13 01:55:07.528868 systemd[1]: Started sshd@5-172.31.28.238:22-139.178.68.195:57166.service - OpenSSH per-connection server daemon (139.178.68.195:57166).
Dec 13 01:55:07.698597 sshd[2451]: Accepted publickey for core from 139.178.68.195 port 57166 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:07.701904 sshd[2451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:07.709244 systemd-logind[2119]: New session 6 of user core.
Dec 13 01:55:07.718954 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:55:07.824352 sudo[2456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:55:07.825524 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:07.831655 sudo[2456]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:07.841586 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:55:07.842185 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:07.866845 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:07.870682 auditctl[2459]: No rules
Dec 13 01:55:07.872784 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:55:07.873339 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:07.888976 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:07.929172 augenrules[2478]: No rules
Dec 13 01:55:07.932318 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:07.935674 sudo[2455]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:07.959844 sshd[2451]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:07.966092 systemd[1]: sshd@5-172.31.28.238:22-139.178.68.195:57166.service: Deactivated successfully.
Dec 13 01:55:07.970807 systemd-logind[2119]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:55:07.973626 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:55:07.976517 systemd-logind[2119]: Removed session 6.
Dec 13 01:55:07.987850 systemd[1]: Started sshd@6-172.31.28.238:22-139.178.68.195:57174.service - OpenSSH per-connection server daemon (139.178.68.195:57174).
Dec 13 01:55:08.167908 sshd[2487]: Accepted publickey for core from 139.178.68.195 port 57174 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:55:08.170460 sshd[2487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:55:08.179726 systemd-logind[2119]: New session 7 of user core.
Dec 13 01:55:08.184885 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:55:08.292690 sudo[2491]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:55:08.293291 sudo[2491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:55:09.382878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:09.392859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:09.440453 systemd[1]: Reloading requested from client PID 2529 ('systemctl') (unit session-7.scope)...
Dec 13 01:55:09.440628 systemd[1]: Reloading...
Dec 13 01:55:09.661425 zram_generator::config[2572]: No configuration found.
Dec 13 01:55:09.912284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:10.069951 systemd[1]: Reloading finished in 628 ms.
Dec 13 01:55:10.152287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:55:10.152742 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:55:10.153567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:10.169186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:10.430722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:10.449963 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:55:10.527959 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:55:10.527959 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:55:10.528543 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:55:10.528543 kubelet[2642]: I1213 01:55:10.528092 2642 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:55:11.422423 kubelet[2642]: I1213 01:55:11.422252 2642 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:55:11.422423 kubelet[2642]: I1213 01:55:11.422310 2642 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:55:11.422737 kubelet[2642]: I1213 01:55:11.422691 2642 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:55:11.456090 kubelet[2642]: I1213 01:55:11.455678 2642 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:55:11.474447 kubelet[2642]: I1213 01:55:11.473849 2642 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:55:11.474956 kubelet[2642]: I1213 01:55:11.474921 2642 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:55:11.475676 kubelet[2642]: I1213 01:55:11.475636 2642 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:55:11.475941 kubelet[2642]: I1213 01:55:11.475919 2642 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:55:11.476157 kubelet[2642]: I1213 01:55:11.476134 2642 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:55:11.478889 kubelet[2642]: I1213 01:55:11.478831 2642 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:55:11.484279 kubelet[2642]: I1213 01:55:11.484218 2642 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:55:11.484279 kubelet[2642]: I1213 01:55:11.484274 2642 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:55:11.484505 kubelet[2642]: I1213 01:55:11.484329 2642 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:55:11.484505 kubelet[2642]: I1213 01:55:11.484362 2642 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:55:11.485434 kubelet[2642]: E1213 01:55:11.485154 2642 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:11.485934 kubelet[2642]: E1213 01:55:11.485891 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:11.488213 kubelet[2642]: I1213 01:55:11.488172 2642 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:55:11.488812 kubelet[2642]: I1213 01:55:11.488767 2642 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:55:11.490918 kubelet[2642]: W1213 01:55:11.490871 2642 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:55:11.492414 kubelet[2642]: I1213 01:55:11.492209 2642 server.go:1256] "Started kubelet"
Dec 13 01:55:11.496673 kubelet[2642]: I1213 01:55:11.496620 2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:55:11.507445 kubelet[2642]: I1213 01:55:11.506533 2642 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:55:11.508278 kubelet[2642]: I1213 01:55:11.508248 2642 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:55:11.510211 kubelet[2642]: I1213 01:55:11.510168 2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:55:11.510693 kubelet[2642]: I1213 01:55:11.510667 2642 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:55:11.512693 kubelet[2642]: I1213 01:55:11.512635 2642 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:55:11.513551 kubelet[2642]: I1213 01:55:11.513511 2642 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:55:11.513655 kubelet[2642]: I1213 01:55:11.513630 2642 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:55:11.517717 kubelet[2642]: E1213 01:55:11.517677 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.28.238\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 01:55:11.517957 kubelet[2642]: W1213 01:55:11.517912 2642 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 01:55:11.518074 kubelet[2642]: E1213 01:55:11.518055 2642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 01:55:11.519277 kubelet[2642]: I1213 01:55:11.519227 2642 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:55:11.520209 kubelet[2642]: I1213 01:55:11.520158 2642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:55:11.522367 kubelet[2642]: E1213 01:55:11.522320 2642 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.238.181099c12c82f51f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.238,UID:172.31.28.238,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.28.238,},FirstTimestamp:2024-12-13 01:55:11.492158751 +0000 UTC m=+1.035369559,LastTimestamp:2024-12-13 01:55:11.492158751 +0000 UTC m=+1.035369559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.238,}"
Dec 13 01:55:11.522757 kubelet[2642]: W1213 01:55:11.522729 2642 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.28.238" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:55:11.522914 kubelet[2642]: E1213 01:55:11.522893 2642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.28.238" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:55:11.523207 kubelet[2642]: W1213 01:55:11.523183 2642 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:55:11.523324 kubelet[2642]: E1213 01:55:11.523306 2642 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:55:11.525124 kubelet[2642]: E1213 01:55:11.525087 2642 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:55:11.527054 kubelet[2642]: E1213 01:55:11.527001 2642 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.238.181099c12e78fd43 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.238,UID:172.31.28.238,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.28.238,},FirstTimestamp:2024-12-13 01:55:11.525059907 +0000 UTC m=+1.068270715,LastTimestamp:2024-12-13 01:55:11.525059907 +0000 UTC m=+1.068270715,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.238,}"
Dec 13 01:55:11.530430 kubelet[2642]: I1213 01:55:11.529201 2642 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:55:11.582706 kubelet[2642]: I1213 01:55:11.582665 2642 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:55:11.582896 kubelet[2642]: I1213 01:55:11.582875 2642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:55:11.583070 kubelet[2642]: I1213 01:55:11.582987 2642 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:55:11.587710 kubelet[2642]: I1213 01:55:11.587477 2642 policy_none.go:49] "None policy: Start"
Dec 13 01:55:11.588915 kubelet[2642]: I1213 01:55:11.588867 2642 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:55:11.589062 kubelet[2642]: I1213 01:55:11.588966 2642 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:55:11.602203 kubelet[2642]: I1213 01:55:11.599422 2642 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:55:11.602203 kubelet[2642]: I1213 01:55:11.599834 2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:55:11.626579 kubelet[2642]: I1213 01:55:11.626538 2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:55:11.630419 kubelet[2642]: I1213 01:55:11.630139 2642 kubelet_node_status.go:73] "Attempting to register node" node="172.31.28.238"
Dec 13 01:55:11.631058 kubelet[2642]: I1213 01:55:11.631024 2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:55:11.631202 kubelet[2642]: I1213 01:55:11.631184 2642 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:55:11.631369 kubelet[2642]: I1213 01:55:11.631347 2642 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:55:11.631940 kubelet[2642]: E1213 01:55:11.631618 2642 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 01:55:11.637034 kubelet[2642]: E1213 01:55:11.636994 2642 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.238\" not found"
Dec 13 01:55:11.645619 kubelet[2642]: I1213 01:55:11.645467 2642 kubelet_node_status.go:76] "Successfully registered node" node="172.31.28.238"
Dec 13 01:55:11.670657 kubelet[2642]: E1213 01:55:11.670612 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:11.771504 kubelet[2642]: E1213 01:55:11.771442 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:11.872005 kubelet[2642]: E1213 01:55:11.871963 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:11.972775 kubelet[2642]: E1213 01:55:11.972740 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:12.073494 kubelet[2642]: E1213 01:55:12.073354 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:12.173995 kubelet[2642]: E1213 01:55:12.173943 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found"
Dec 13 01:55:12.274563 kubelet[2642]: E1213 01:55:12.274514 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node
\"172.31.28.238\" not found" Dec 13 01:55:12.375190 kubelet[2642]: E1213 01:55:12.375077 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found" Dec 13 01:55:12.425519 kubelet[2642]: I1213 01:55:12.425429 2642 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:55:12.425682 kubelet[2642]: W1213 01:55:12.425645 2642 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:55:12.475996 kubelet[2642]: E1213 01:55:12.475936 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found" Dec 13 01:55:12.486225 kubelet[2642]: E1213 01:55:12.486187 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:12.576538 kubelet[2642]: E1213 01:55:12.576477 2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.28.238\" not found" Dec 13 01:55:12.678406 kubelet[2642]: I1213 01:55:12.678109 2642 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:55:12.679161 containerd[2144]: time="2024-12-13T01:55:12.678831508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:55:12.679802 kubelet[2642]: I1213 01:55:12.679311 2642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:55:12.960691 sudo[2491]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:12.985498 sshd[2487]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:12.993499 systemd[1]: sshd@6-172.31.28.238:22-139.178.68.195:57174.service: Deactivated successfully. Dec 13 01:55:13.001548 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:13.003218 systemd-logind[2119]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:13.005123 systemd-logind[2119]: Removed session 7. Dec 13 01:55:13.487432 kubelet[2642]: E1213 01:55:13.487243 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:13.487432 kubelet[2642]: I1213 01:55:13.487253 2642 apiserver.go:52] "Watching apiserver" Dec 13 01:55:13.494067 kubelet[2642]: I1213 01:55:13.494025 2642 topology_manager.go:215] "Topology Admit Handler" podUID="5e51dc3a-bab0-4a41-bb37-4a5f139f010d" podNamespace="calico-system" podName="calico-node-9wb5b" Dec 13 01:55:13.496125 kubelet[2642]: I1213 01:55:13.495441 2642 topology_manager.go:215] "Topology Admit Handler" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58" podNamespace="calico-system" podName="csi-node-driver-hqj6z" Dec 13 01:55:13.496125 kubelet[2642]: I1213 01:55:13.495681 2642 topology_manager.go:215] "Topology Admit Handler" podUID="03d0aeb1-5fe8-48dc-a2bd-ec73648b3425" podNamespace="kube-system" podName="kube-proxy-hk2w8" Dec 13 01:55:13.496597 kubelet[2642]: E1213 01:55:13.496533 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqj6z" 
podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58" Dec 13 01:55:13.514256 kubelet[2642]: I1213 01:55:13.514146 2642 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:55:13.532460 kubelet[2642]: I1213 01:55:13.532304 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-tigera-ca-bundle\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.532460 kubelet[2642]: I1213 01:55:13.532409 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-cni-net-dir\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.532460 kubelet[2642]: I1213 01:55:13.532469 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqvwv\" (UniqueName: \"kubernetes.io/projected/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-kube-api-access-jqvwv\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.532798 kubelet[2642]: I1213 01:55:13.532526 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdq5\" (UniqueName: \"kubernetes.io/projected/c7cde905-d050-47e3-b9a7-34a19a0f3e58-kube-api-access-5zdq5\") pod \"csi-node-driver-hqj6z\" (UID: \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\") " pod="calico-system/csi-node-driver-hqj6z" Dec 13 01:55:13.532798 kubelet[2642]: I1213 01:55:13.532574 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/03d0aeb1-5fe8-48dc-a2bd-ec73648b3425-kube-proxy\") pod \"kube-proxy-hk2w8\" (UID: \"03d0aeb1-5fe8-48dc-a2bd-ec73648b3425\") " pod="kube-system/kube-proxy-hk2w8" Dec 13 01:55:13.532798 kubelet[2642]: I1213 01:55:13.532625 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pp6j\" (UniqueName: \"kubernetes.io/projected/03d0aeb1-5fe8-48dc-a2bd-ec73648b3425-kube-api-access-5pp6j\") pod \"kube-proxy-hk2w8\" (UID: \"03d0aeb1-5fe8-48dc-a2bd-ec73648b3425\") " pod="kube-system/kube-proxy-hk2w8" Dec 13 01:55:13.532798 kubelet[2642]: I1213 01:55:13.532677 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-lib-modules\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.532798 kubelet[2642]: I1213 01:55:13.532724 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-xtables-lock\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533073 kubelet[2642]: I1213 01:55:13.532770 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-policysync\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533073 kubelet[2642]: I1213 01:55:13.532816 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-node-certs\") pod 
\"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533073 kubelet[2642]: I1213 01:55:13.532874 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7cde905-d050-47e3-b9a7-34a19a0f3e58-kubelet-dir\") pod \"csi-node-driver-hqj6z\" (UID: \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\") " pod="calico-system/csi-node-driver-hqj6z" Dec 13 01:55:13.533073 kubelet[2642]: I1213 01:55:13.532924 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7cde905-d050-47e3-b9a7-34a19a0f3e58-socket-dir\") pod \"csi-node-driver-hqj6z\" (UID: \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\") " pod="calico-system/csi-node-driver-hqj6z" Dec 13 01:55:13.533073 kubelet[2642]: I1213 01:55:13.532969 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-cni-bin-dir\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533325 kubelet[2642]: I1213 01:55:13.533012 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-cni-log-dir\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533325 kubelet[2642]: I1213 01:55:13.533060 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-flexvol-driver-host\") pod \"calico-node-9wb5b\" (UID: 
\"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533325 kubelet[2642]: I1213 01:55:13.533109 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03d0aeb1-5fe8-48dc-a2bd-ec73648b3425-xtables-lock\") pod \"kube-proxy-hk2w8\" (UID: \"03d0aeb1-5fe8-48dc-a2bd-ec73648b3425\") " pod="kube-system/kube-proxy-hk2w8" Dec 13 01:55:13.533325 kubelet[2642]: I1213 01:55:13.533197 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03d0aeb1-5fe8-48dc-a2bd-ec73648b3425-lib-modules\") pod \"kube-proxy-hk2w8\" (UID: \"03d0aeb1-5fe8-48dc-a2bd-ec73648b3425\") " pod="kube-system/kube-proxy-hk2w8" Dec 13 01:55:13.533325 kubelet[2642]: I1213 01:55:13.533247 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-var-run-calico\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533641 kubelet[2642]: I1213 01:55:13.533300 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e51dc3a-bab0-4a41-bb37-4a5f139f010d-var-lib-calico\") pod \"calico-node-9wb5b\" (UID: \"5e51dc3a-bab0-4a41-bb37-4a5f139f010d\") " pod="calico-system/calico-node-9wb5b" Dec 13 01:55:13.533641 kubelet[2642]: I1213 01:55:13.533357 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7cde905-d050-47e3-b9a7-34a19a0f3e58-varrun\") pod \"csi-node-driver-hqj6z\" (UID: \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\") " pod="calico-system/csi-node-driver-hqj6z" Dec 13 
01:55:13.533641 kubelet[2642]: I1213 01:55:13.533442 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7cde905-d050-47e3-b9a7-34a19a0f3e58-registration-dir\") pod \"csi-node-driver-hqj6z\" (UID: \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\") " pod="calico-system/csi-node-driver-hqj6z" Dec 13 01:55:13.641790 kubelet[2642]: E1213 01:55:13.641715 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.645023 kubelet[2642]: W1213 01:55:13.641756 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.645023 kubelet[2642]: E1213 01:55:13.644584 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:13.650017 kubelet[2642]: E1213 01:55:13.649716 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.650017 kubelet[2642]: W1213 01:55:13.649752 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.650017 kubelet[2642]: E1213 01:55:13.649811 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:13.651156 kubelet[2642]: E1213 01:55:13.651106 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.651156 kubelet[2642]: W1213 01:55:13.651146 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.651314 kubelet[2642]: E1213 01:55:13.651187 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:13.670530 kubelet[2642]: E1213 01:55:13.670489 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.672443 kubelet[2642]: W1213 01:55:13.670876 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.672443 kubelet[2642]: E1213 01:55:13.670931 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:13.680805 kubelet[2642]: E1213 01:55:13.680767 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.681021 kubelet[2642]: W1213 01:55:13.680990 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.681606 kubelet[2642]: E1213 01:55:13.681173 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:55:13.683757 kubelet[2642]: E1213 01:55:13.683698 2642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:55:13.683883 kubelet[2642]: W1213 01:55:13.683764 2642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:55:13.683883 kubelet[2642]: E1213 01:55:13.683809 2642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:55:13.803512 containerd[2144]: time="2024-12-13T01:55:13.803412666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9wb5b,Uid:5e51dc3a-bab0-4a41-bb37-4a5f139f010d,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:13.807026 containerd[2144]: time="2024-12-13T01:55:13.806729874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk2w8,Uid:03d0aeb1-5fe8-48dc-a2bd-ec73648b3425,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:14.418481 containerd[2144]: time="2024-12-13T01:55:14.417633941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:14.419740 containerd[2144]: time="2024-12-13T01:55:14.419673485Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:14.422062 containerd[2144]: time="2024-12-13T01:55:14.421796345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:55:14.425093 containerd[2144]: time="2024-12-13T01:55:14.424735337Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:14.427014 containerd[2144]: time="2024-12-13T01:55:14.426843845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:14.432818 containerd[2144]: time="2024-12-13T01:55:14.432696509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:14.439864 containerd[2144]: time="2024-12-13T01:55:14.439717673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.173907ms" Dec 13 01:55:14.445847 containerd[2144]: time="2024-12-13T01:55:14.445413449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 638.526891ms" Dec 13 01:55:14.488283 kubelet[2642]: E1213 01:55:14.488143 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:14.639428 containerd[2144]: time="2024-12-13T01:55:14.638911590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:14.639428 containerd[2144]: time="2024-12-13T01:55:14.639047934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:14.639428 containerd[2144]: time="2024-12-13T01:55:14.639087738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.642250 containerd[2144]: time="2024-12-13T01:55:14.641983746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.651460 containerd[2144]: time="2024-12-13T01:55:14.651171678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:14.651460 containerd[2144]: time="2024-12-13T01:55:14.651308754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:14.651460 containerd[2144]: time="2024-12-13T01:55:14.651347898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.655804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316836710.mount: Deactivated successfully. Dec 13 01:55:14.660490 containerd[2144]: time="2024-12-13T01:55:14.653693286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.856029 containerd[2144]: time="2024-12-13T01:55:14.855898639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9wb5b,Uid:5e51dc3a-bab0-4a41-bb37-4a5f139f010d,Namespace:calico-system,Attempt:0,} returns sandbox id \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\"" Dec 13 01:55:14.865990 containerd[2144]: time="2024-12-13T01:55:14.865355851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk2w8,Uid:03d0aeb1-5fe8-48dc-a2bd-ec73648b3425,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f4d0c778c45d4e43889bd17573d50bab49ca762fbc28475b3d98fbd79957d9d\"" Dec 13 01:55:14.868529 containerd[2144]: time="2024-12-13T01:55:14.868222687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:55:15.489329 kubelet[2642]: E1213 01:55:15.489269 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Dec 13 01:55:15.633067 kubelet[2642]: E1213 01:55:15.632581 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58" Dec 13 01:55:16.061656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514966880.mount: Deactivated successfully. Dec 13 01:55:16.210053 containerd[2144]: time="2024-12-13T01:55:16.209968878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.212019 containerd[2144]: time="2024-12-13T01:55:16.211950354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 01:55:16.214422 containerd[2144]: time="2024-12-13T01:55:16.214325082Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.218707 containerd[2144]: time="2024-12-13T01:55:16.218656182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:16.220222 containerd[2144]: time="2024-12-13T01:55:16.219974010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.351681363s" Dec 13 01:55:16.220222 
containerd[2144]: time="2024-12-13T01:55:16.220035882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:55:16.221534 containerd[2144]: time="2024-12-13T01:55:16.221193294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:55:16.224317 containerd[2144]: time="2024-12-13T01:55:16.223726470Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:55:16.255897 containerd[2144]: time="2024-12-13T01:55:16.255812694Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7661c97e6fa0a77cded4afe64139eeafae40d2f32c6c957d5106202df66d6239\"" Dec 13 01:55:16.256843 containerd[2144]: time="2024-12-13T01:55:16.256781514Z" level=info msg="StartContainer for \"7661c97e6fa0a77cded4afe64139eeafae40d2f32c6c957d5106202df66d6239\"" Dec 13 01:55:16.354201 containerd[2144]: time="2024-12-13T01:55:16.353914819Z" level=info msg="StartContainer for \"7661c97e6fa0a77cded4afe64139eeafae40d2f32c6c957d5106202df66d6239\" returns successfully" Dec 13 01:55:16.454405 containerd[2144]: time="2024-12-13T01:55:16.454045699Z" level=info msg="shim disconnected" id=7661c97e6fa0a77cded4afe64139eeafae40d2f32c6c957d5106202df66d6239 namespace=k8s.io Dec 13 01:55:16.454405 containerd[2144]: time="2024-12-13T01:55:16.454157659Z" level=warning msg="cleaning up after shim disconnected" id=7661c97e6fa0a77cded4afe64139eeafae40d2f32c6c957d5106202df66d6239 namespace=k8s.io Dec 13 01:55:16.454405 containerd[2144]: time="2024-12-13T01:55:16.454206919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:16.489695 kubelet[2642]: E1213 01:55:16.489631 2642 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:17.490120 kubelet[2642]: E1213 01:55:17.490071 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:17.550116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980517955.mount: Deactivated successfully.
Dec 13 01:55:17.634148 kubelet[2642]: E1213 01:55:17.633611 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58"
Dec 13 01:55:18.090567 containerd[2144]: time="2024-12-13T01:55:18.090490327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:18.092598 containerd[2144]: time="2024-12-13T01:55:18.092513347Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977"
Dec 13 01:55:18.093887 containerd[2144]: time="2024-12-13T01:55:18.093796771Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:18.098875 containerd[2144]: time="2024-12-13T01:55:18.098759575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:18.100637 containerd[2144]: time="2024-12-13T01:55:18.100333603Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.879079121s"
Dec 13 01:55:18.100637 containerd[2144]: time="2024-12-13T01:55:18.100437799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 01:55:18.102329 containerd[2144]: time="2024-12-13T01:55:18.101993767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:55:18.105073 containerd[2144]: time="2024-12-13T01:55:18.104653543Z" level=info msg="CreateContainer within sandbox \"6f4d0c778c45d4e43889bd17573d50bab49ca762fbc28475b3d98fbd79957d9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:55:18.134220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831580553.mount: Deactivated successfully.
Dec 13 01:55:18.140099 containerd[2144]: time="2024-12-13T01:55:18.140038112Z" level=info msg="CreateContainer within sandbox \"6f4d0c778c45d4e43889bd17573d50bab49ca762fbc28475b3d98fbd79957d9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd1fec63903cd86b995ec723f4919b6607cd016aec2b2034f145400f22cecab4\""
Dec 13 01:55:18.142609 containerd[2144]: time="2024-12-13T01:55:18.141260600Z" level=info msg="StartContainer for \"bd1fec63903cd86b995ec723f4919b6607cd016aec2b2034f145400f22cecab4\""
Dec 13 01:55:18.246160 containerd[2144]: time="2024-12-13T01:55:18.246061604Z" level=info msg="StartContainer for \"bd1fec63903cd86b995ec723f4919b6607cd016aec2b2034f145400f22cecab4\" returns successfully"
Dec 13 01:55:18.491633 kubelet[2642]: E1213 01:55:18.491459 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:18.702193 kubelet[2642]: I1213 01:55:18.702074 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hk2w8" podStartSLOduration=4.470007258 podStartE2EDuration="7.701988298s" podCreationTimestamp="2024-12-13 01:55:11 +0000 UTC" firstStartedPulling="2024-12-13 01:55:14.868857703 +0000 UTC m=+4.412068499" lastFinishedPulling="2024-12-13 01:55:18.100838743 +0000 UTC m=+7.644049539" observedRunningTime="2024-12-13 01:55:18.700930594 +0000 UTC m=+8.244141390" watchObservedRunningTime="2024-12-13 01:55:18.701988298 +0000 UTC m=+8.245199094"
Dec 13 01:55:19.492609 kubelet[2642]: E1213 01:55:19.492537 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:19.634636 kubelet[2642]: E1213 01:55:19.633747 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58"
Dec 13 01:55:20.493608 kubelet[2642]: E1213 01:55:20.493555 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:21.395674 containerd[2144]: time="2024-12-13T01:55:21.395593848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:21.397797 containerd[2144]: time="2024-12-13T01:55:21.397732500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Dec 13 01:55:21.398644 containerd[2144]: time="2024-12-13T01:55:21.398584632Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:21.404484 containerd[2144]: time="2024-12-13T01:55:21.404412480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.302325737s"
Dec 13 01:55:21.404484 containerd[2144]: time="2024-12-13T01:55:21.404476944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Dec 13 01:55:21.404854 containerd[2144]: time="2024-12-13T01:55:21.404613144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:21.408267 containerd[2144]: time="2024-12-13T01:55:21.408197220Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:55:21.427112 containerd[2144]: time="2024-12-13T01:55:21.427042920Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7\""
Dec 13 01:55:21.428582 containerd[2144]: time="2024-12-13T01:55:21.427982976Z" level=info msg="StartContainer for \"4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7\""
Dec 13 01:55:21.495071 kubelet[2642]: E1213 01:55:21.495000 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:21.526323 containerd[2144]: time="2024-12-13T01:55:21.526244712Z" level=info msg="StartContainer for \"4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7\" returns successfully"
Dec 13 01:55:21.636885 kubelet[2642]: E1213 01:55:21.634640 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58"
Dec 13 01:55:22.376139 containerd[2144]: time="2024-12-13T01:55:22.376066189Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:55:22.408658 kubelet[2642]: I1213 01:55:22.408448 2642 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:55:22.420321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7-rootfs.mount: Deactivated successfully.
Dec 13 01:55:22.496074 kubelet[2642]: E1213 01:55:22.495993 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:23.497165 kubelet[2642]: E1213 01:55:23.497103 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:23.637965 containerd[2144]: time="2024-12-13T01:55:23.637434999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqj6z,Uid:c7cde905-d050-47e3-b9a7-34a19a0f3e58,Namespace:calico-system,Attempt:0,}"
Dec 13 01:55:24.012966 containerd[2144]: time="2024-12-13T01:55:24.012886561Z" level=info msg="shim disconnected" id=4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7 namespace=k8s.io
Dec 13 01:55:24.012966 containerd[2144]: time="2024-12-13T01:55:24.012965641Z" level=warning msg="cleaning up after shim disconnected" id=4531e3ee7e4200aee7ed4480171f47c88ed87d566dbdb2f8d3371820369347f7 namespace=k8s.io
Dec 13 01:55:24.015924 containerd[2144]: time="2024-12-13T01:55:24.012987661Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:24.106588 containerd[2144]: time="2024-12-13T01:55:24.106523473Z" level=error msg="Failed to destroy network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:24.109404 containerd[2144]: time="2024-12-13T01:55:24.107422249Z" level=error msg="encountered an error cleaning up failed sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:24.109662 containerd[2144]: time="2024-12-13T01:55:24.109607125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqj6z,Uid:c7cde905-d050-47e3-b9a7-34a19a0f3e58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:24.110125 kubelet[2642]: E1213 01:55:24.110073 2642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:24.110277 kubelet[2642]: E1213 01:55:24.110170 2642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqj6z"
Dec 13 01:55:24.110277 kubelet[2642]: E1213 01:55:24.110208 2642 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqj6z"
Dec 13 01:55:24.110460 kubelet[2642]: E1213 01:55:24.110302 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hqj6z_calico-system(c7cde905-d050-47e3-b9a7-34a19a0f3e58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hqj6z_calico-system(c7cde905-d050-47e3-b9a7-34a19a0f3e58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58"
Dec 13 01:55:24.111557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de-shm.mount: Deactivated successfully.
Dec 13 01:55:24.498017 kubelet[2642]: E1213 01:55:24.497302 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:24.707205 kubelet[2642]: I1213 01:55:24.706939 2642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:24.708257 containerd[2144]: time="2024-12-13T01:55:24.708191392Z" level=info msg="StopPodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\""
Dec 13 01:55:24.708860 containerd[2144]: time="2024-12-13T01:55:24.708496804Z" level=info msg="Ensure that sandbox 53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de in task-service has been cleanup successfully"
Dec 13 01:55:24.714315 containerd[2144]: time="2024-12-13T01:55:24.714201292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 01:55:24.760570 containerd[2144]: time="2024-12-13T01:55:24.759837676Z" level=error msg="StopPodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" failed" error="failed to destroy network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:24.760702 kubelet[2642]: E1213 01:55:24.760160 2642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:24.760702 kubelet[2642]: E1213 01:55:24.760254 2642 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"}
Dec 13 01:55:24.760702 kubelet[2642]: E1213 01:55:24.760317 2642 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:55:24.760702 kubelet[2642]: E1213 01:55:24.760370 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7cde905-d050-47e3-b9a7-34a19a0f3e58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hqj6z" podUID="c7cde905-d050-47e3-b9a7-34a19a0f3e58"
Dec 13 01:55:25.498324 kubelet[2642]: E1213 01:55:25.498260 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:25.709477 kubelet[2642]: I1213 01:55:25.709429 2642 topology_manager.go:215] "Topology Admit Handler" podUID="3a5cb789-87bb-43be-bb84-dd3db54a81ed" podNamespace="default" podName="nginx-deployment-6d5f899847-pp7nv"
Dec 13 01:55:25.824670 kubelet[2642]: I1213 01:55:25.824457 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2c6k\" (UniqueName: \"kubernetes.io/projected/3a5cb789-87bb-43be-bb84-dd3db54a81ed-kube-api-access-r2c6k\") pod \"nginx-deployment-6d5f899847-pp7nv\" (UID: \"3a5cb789-87bb-43be-bb84-dd3db54a81ed\") " pod="default/nginx-deployment-6d5f899847-pp7nv"
Dec 13 01:55:26.017671 containerd[2144]: time="2024-12-13T01:55:26.016673271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pp7nv,Uid:3a5cb789-87bb-43be-bb84-dd3db54a81ed,Namespace:default,Attempt:0,}"
Dec 13 01:55:26.224469 containerd[2144]: time="2024-12-13T01:55:26.223822252Z" level=error msg="Failed to destroy network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:26.228424 containerd[2144]: time="2024-12-13T01:55:26.225842476Z" level=error msg="encountered an error cleaning up failed sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:26.228424 containerd[2144]: time="2024-12-13T01:55:26.225934876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pp7nv,Uid:3a5cb789-87bb-43be-bb84-dd3db54a81ed,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:26.231343 kubelet[2642]: E1213 01:55:26.228919 2642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:26.231343 kubelet[2642]: E1213 01:55:26.229004 2642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pp7nv"
Dec 13 01:55:26.231343 kubelet[2642]: E1213 01:55:26.229045 2642 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pp7nv"
Dec 13 01:55:26.230869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242-shm.mount: Deactivated successfully.
Dec 13 01:55:26.232065 kubelet[2642]: E1213 01:55:26.229125 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-pp7nv_default(3a5cb789-87bb-43be-bb84-dd3db54a81ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-pp7nv_default(3a5cb789-87bb-43be-bb84-dd3db54a81ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-pp7nv" podUID="3a5cb789-87bb-43be-bb84-dd3db54a81ed"
Dec 13 01:55:26.500120 kubelet[2642]: E1213 01:55:26.499481 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:26.719187 kubelet[2642]: I1213 01:55:26.719149 2642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242"
Dec 13 01:55:26.720900 containerd[2144]: time="2024-12-13T01:55:26.720839298Z" level=info msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\""
Dec 13 01:55:26.722492 containerd[2144]: time="2024-12-13T01:55:26.722416926Z" level=info msg="Ensure that sandbox 2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242 in task-service has been cleanup successfully"
Dec 13 01:55:26.781505 containerd[2144]: time="2024-12-13T01:55:26.780789798Z" level=error msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" failed" error="failed to destroy network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:26.782066 kubelet[2642]: E1213 01:55:26.781124 2642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242"
Dec 13 01:55:26.782066 kubelet[2642]: E1213 01:55:26.781186 2642 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242"}
Dec 13 01:55:26.782066 kubelet[2642]: E1213 01:55:26.781248 2642 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a5cb789-87bb-43be-bb84-dd3db54a81ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:55:26.782066 kubelet[2642]: E1213 01:55:26.781299 2642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a5cb789-87bb-43be-bb84-dd3db54a81ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-pp7nv" podUID="3a5cb789-87bb-43be-bb84-dd3db54a81ed"
Dec 13 01:55:27.500319 kubelet[2642]: E1213 01:55:27.500228 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:28.501604 kubelet[2642]: E1213 01:55:28.501518 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:28.929895 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:55:29.502141 kubelet[2642]: E1213 01:55:29.502040 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:30.502938 kubelet[2642]: E1213 01:55:30.502885 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:30.625695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592563280.mount: Deactivated successfully.
Dec 13 01:55:30.699710 containerd[2144]: time="2024-12-13T01:55:30.698345194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:30.701522 containerd[2144]: time="2024-12-13T01:55:30.701462734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Dec 13 01:55:30.704604 containerd[2144]: time="2024-12-13T01:55:30.704533738Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:30.714210 containerd[2144]: time="2024-12-13T01:55:30.714138706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:30.715840 containerd[2144]: time="2024-12-13T01:55:30.715784518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.001521462s"
Dec 13 01:55:30.716020 containerd[2144]: time="2024-12-13T01:55:30.715988362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Dec 13 01:55:30.740019 containerd[2144]: time="2024-12-13T01:55:30.739966546Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 01:55:30.780059 containerd[2144]: time="2024-12-13T01:55:30.779795326Z" level=info msg="CreateContainer within sandbox \"26b03295e92a287d6258ccb3472619662ce3ebe388d06065ea43429448d8b75f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3e0c916f592bc74dec629e74b5e2a72220fc7900fbb04c8ffd41116b095a058\""
Dec 13 01:55:30.780805 containerd[2144]: time="2024-12-13T01:55:30.780749758Z" level=info msg="StartContainer for \"c3e0c916f592bc74dec629e74b5e2a72220fc7900fbb04c8ffd41116b095a058\""
Dec 13 01:55:30.883046 containerd[2144]: time="2024-12-13T01:55:30.882973079Z" level=info msg="StartContainer for \"c3e0c916f592bc74dec629e74b5e2a72220fc7900fbb04c8ffd41116b095a058\" returns successfully"
Dec 13 01:55:30.987943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 01:55:30.988081 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:55:31.484998 kubelet[2642]: E1213 01:55:31.484941 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:31.503083 kubelet[2642]: E1213 01:55:31.503038 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:31.773182 kubelet[2642]: I1213 01:55:31.773138 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9wb5b" podStartSLOduration=4.920755028 podStartE2EDuration="20.773075555s" podCreationTimestamp="2024-12-13 01:55:11 +0000 UTC" firstStartedPulling="2024-12-13 01:55:14.864256879 +0000 UTC m=+4.407467675" lastFinishedPulling="2024-12-13 01:55:30.716577394 +0000 UTC m=+20.259788202" observedRunningTime="2024-12-13 01:55:31.771096647 +0000 UTC m=+21.314307455" watchObservedRunningTime="2024-12-13 01:55:31.773075555 +0000 UTC m=+21.316286363"
Dec 13 01:55:32.503532 kubelet[2642]: E1213 01:55:32.503458 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:32.849454 kernel: bpftool[3424]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 01:55:33.144794 systemd-networkd[1696]: vxlan.calico: Link UP
Dec 13 01:55:33.145040 (udev-worker)[3235]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:33.145189 systemd-networkd[1696]: vxlan.calico: Gained carrier
Dec 13 01:55:33.188281 (udev-worker)[3236]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:33.504749 kubelet[2642]: E1213 01:55:33.504671 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:34.505523 kubelet[2642]: E1213 01:55:34.505463 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:34.605452 systemd-networkd[1696]: vxlan.calico: Gained IPv6LL
Dec 13 01:55:35.505791 kubelet[2642]: E1213 01:55:35.505727 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:36.506183 kubelet[2642]: E1213 01:55:36.506110 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:37.440521 ntpd[2103]: Listen normally on 6 vxlan.calico 192.168.96.192:123
Dec 13 01:55:37.440653 ntpd[2103]: Listen normally on 7 vxlan.calico [fe80::6493:aeff:fec3:17d1%3]:123
Dec 13 01:55:37.441500 ntpd[2103]: 13 Dec 01:55:37 ntpd[2103]: Listen normally on 6 vxlan.calico 192.168.96.192:123
Dec 13 01:55:37.441500 ntpd[2103]: 13 Dec 01:55:37 ntpd[2103]: Listen normally on 7 vxlan.calico [fe80::6493:aeff:fec3:17d1%3]:123
Dec 13 01:55:37.506531 kubelet[2642]: E1213 01:55:37.506465 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:38.507360 kubelet[2642]: E1213 01:55:38.507297 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:38.633740 containerd[2144]: time="2024-12-13T01:55:38.633124613Z" level=info msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\""
Dec 13 01:55:38.633740 containerd[2144]: time="2024-12-13T01:55:38.633313901Z" level=info msg="StopPodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\""
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.725 [INFO][3528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.725 [INFO][3528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" iface="eth0" netns="/var/run/netns/cni-1630e10c-dd7b-596e-3201-22fe354ccf59"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.726 [INFO][3528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" iface="eth0" netns="/var/run/netns/cni-1630e10c-dd7b-596e-3201-22fe354ccf59"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.727 [INFO][3528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" iface="eth0" netns="/var/run/netns/cni-1630e10c-dd7b-596e-3201-22fe354ccf59"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.727 [INFO][3528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.727 [INFO][3528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.778 [INFO][3544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.778 [INFO][3544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.778 [INFO][3544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.792 [WARNING][3544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.793 [INFO][3544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0"
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.795 [INFO][3544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:55:38.802541 containerd[2144]: 2024-12-13 01:55:38.799 [INFO][3528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de"
Dec 13 01:55:38.804589 containerd[2144]: time="2024-12-13T01:55:38.804505782Z" level=info msg="TearDown network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" successfully"
Dec 13 01:55:38.807605 containerd[2144]: time="2024-12-13T01:55:38.804787806Z" level=info msg="StopPodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" returns successfully"
Dec 13 01:55:38.807605 containerd[2144]: time="2024-12-13T01:55:38.806656638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqj6z,Uid:c7cde905-d050-47e3-b9a7-34a19a0f3e58,Namespace:calico-system,Attempt:1,}"
Dec 13 01:55:38.809863 systemd[1]: run-netns-cni\x2d1630e10c\x2ddd7b\x2d596e\x2d3201\x2d22fe354ccf59.mount: Deactivated successfully.
Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.721 [INFO][3527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.721 [INFO][3527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" iface="eth0" netns="/var/run/netns/cni-a34ecbea-e95f-84a9-26ea-0c4f73b8d1db" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.721 [INFO][3527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" iface="eth0" netns="/var/run/netns/cni-a34ecbea-e95f-84a9-26ea-0c4f73b8d1db" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.722 [INFO][3527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" iface="eth0" netns="/var/run/netns/cni-a34ecbea-e95f-84a9-26ea-0c4f73b8d1db" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.722 [INFO][3527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.722 [INFO][3527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.777 [INFO][3540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.779 [INFO][3540] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.795 [INFO][3540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.812 [WARNING][3540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.812 [INFO][3540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.814 [INFO][3540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:38.820405 containerd[2144]: 2024-12-13 01:55:38.817 [INFO][3527] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:55:38.820405 containerd[2144]: time="2024-12-13T01:55:38.820282734Z" level=info msg="TearDown network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" successfully" Dec 13 01:55:38.820405 containerd[2144]: time="2024-12-13T01:55:38.820322922Z" level=info msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" returns successfully" Dec 13 01:55:38.823242 containerd[2144]: time="2024-12-13T01:55:38.822865662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pp7nv,Uid:3a5cb789-87bb-43be-bb84-dd3db54a81ed,Namespace:default,Attempt:1,}" Dec 13 01:55:38.825607 systemd[1]: run-netns-cni\x2da34ecbea\x2de95f\x2d84a9\x2d26ea\x2d0c4f73b8d1db.mount: Deactivated successfully. Dec 13 01:55:39.064530 systemd-networkd[1696]: cali81af34b576d: Link UP Dec 13 01:55:39.064918 systemd-networkd[1696]: cali81af34b576d: Gained carrier Dec 13 01:55:39.071348 (udev-worker)[3590]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:38.931 [INFO][3554] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.238-k8s-csi--node--driver--hqj6z-eth0 csi-node-driver- calico-system c7cde905-d050-47e3-b9a7-34a19a0f3e58 1014 0 2024-12-13 01:55:11 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.28.238 csi-node-driver-hqj6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali81af34b576d [] []}} ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:38.931 [INFO][3554] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:38.994 [INFO][3577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" HandleID="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.013 [INFO][3577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" HandleID="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" 
Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011d6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.28.238", "pod":"csi-node-driver-hqj6z", "timestamp":"2024-12-13 01:55:38.994257127 +0000 UTC"}, Hostname:"172.31.28.238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.013 [INFO][3577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.013 [INFO][3577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.013 [INFO][3577] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.238' Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.016 [INFO][3577] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.025 [INFO][3577] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.032 [INFO][3577] ipam/ipam.go 489: Trying affinity for 192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.034 [INFO][3577] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.038 [INFO][3577] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.038 [INFO][3577] ipam/ipam.go 1180: Attempting to assign 1 addresses 
from block block=192.168.96.192/26 handle="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.040 [INFO][3577] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5 Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.046 [INFO][3577] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.055 [INFO][3577] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.193/26] block=192.168.96.192/26 handle="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.056 [INFO][3577] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.193/26] handle="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" host="172.31.28.238" Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.056 [INFO][3577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:39.093629 containerd[2144]: 2024-12-13 01:55:39.056 [INFO][3577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.193/26] IPv6=[] ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" HandleID="k8s-pod-network.629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.059 [INFO][3554] cni-plugin/k8s.go 386: Populated endpoint ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-csi--node--driver--hqj6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cde905-d050-47e3-b9a7-34a19a0f3e58", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"", Pod:"csi-node-driver-hqj6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali81af34b576d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.059 [INFO][3554] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.193/32] ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.059 [INFO][3554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81af34b576d ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.063 [INFO][3554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.064 [INFO][3554] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-csi--node--driver--hqj6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cde905-d050-47e3-b9a7-34a19a0f3e58", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2024, 
time.December, 13, 1, 55, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5", Pod:"csi-node-driver-hqj6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali81af34b576d", MAC:"f2:b8:c1:d7:f3:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:39.096439 containerd[2144]: 2024-12-13 01:55:39.088 [INFO][3554] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5" Namespace="calico-system" Pod="csi-node-driver-hqj6z" WorkloadEndpoint="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:55:39.141558 containerd[2144]: time="2024-12-13T01:55:39.141298756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:39.141558 containerd[2144]: time="2024-12-13T01:55:39.141498928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:39.141907 containerd[2144]: time="2024-12-13T01:55:39.141578584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.141907 containerd[2144]: time="2024-12-13T01:55:39.141773932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.157914 systemd-networkd[1696]: cali16aeb5de558: Link UP Dec 13 01:55:39.158311 systemd-networkd[1696]: cali16aeb5de558: Gained carrier Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:38.944 [INFO][3563] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0 nginx-deployment-6d5f899847- default 3a5cb789-87bb-43be-bb84-dd3db54a81ed 1013 0 2024-12-13 01:55:25 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.28.238 nginx-deployment-6d5f899847-pp7nv eth0 default [] [] [kns.default ksa.default.default] cali16aeb5de558 [] []}} ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:38.945 [INFO][3563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:38.997 [INFO][3581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" HandleID="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.019 [INFO][3581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" HandleID="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d2d20), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.238", "pod":"nginx-deployment-6d5f899847-pp7nv", "timestamp":"2024-12-13 01:55:38.997368943 +0000 UTC"}, Hostname:"172.31.28.238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.020 [INFO][3581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.056 [INFO][3581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.056 [INFO][3581] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.238' Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.059 [INFO][3581] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.073 [INFO][3581] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.089 [INFO][3581] ipam/ipam.go 489: Trying affinity for 192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.106 [INFO][3581] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.112 [INFO][3581] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.112 [INFO][3581] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.118 [INFO][3581] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.132 [INFO][3581] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.143 [INFO][3581] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.194/26] block=192.168.96.192/26 
handle="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.143 [INFO][3581] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.194/26] handle="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" host="172.31.28.238" Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.143 [INFO][3581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:39.195280 containerd[2144]: 2024-12-13 01:55:39.143 [INFO][3581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.194/26] IPv6=[] ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" HandleID="k8s-pod-network.537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.149 [INFO][3563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3a5cb789-87bb-43be-bb84-dd3db54a81ed", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"", Pod:"nginx-deployment-6d5f899847-pp7nv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali16aeb5de558", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.150 [INFO][3563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.194/32] ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.150 [INFO][3563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16aeb5de558 ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.160 [INFO][3563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.162 [INFO][3563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" 
Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3a5cb789-87bb-43be-bb84-dd3db54a81ed", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae", Pod:"nginx-deployment-6d5f899847-pp7nv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali16aeb5de558", MAC:"fe:ea:4b:77:08:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:39.197695 containerd[2144]: 2024-12-13 01:55:39.175 [INFO][3563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae" Namespace="default" Pod="nginx-deployment-6d5f899847-pp7nv" WorkloadEndpoint="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:55:39.248710 containerd[2144]: time="2024-12-13T01:55:39.248557660Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-hqj6z,Uid:c7cde905-d050-47e3-b9a7-34a19a0f3e58,Namespace:calico-system,Attempt:1,} returns sandbox id \"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5\"" Dec 13 01:55:39.253953 containerd[2144]: time="2024-12-13T01:55:39.252986212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:39.261439 containerd[2144]: time="2024-12-13T01:55:39.260889748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:39.261439 containerd[2144]: time="2024-12-13T01:55:39.260985076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:39.261439 containerd[2144]: time="2024-12-13T01:55:39.261059560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.262501 containerd[2144]: time="2024-12-13T01:55:39.261854392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:39.341753 containerd[2144]: time="2024-12-13T01:55:39.341550029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pp7nv,Uid:3a5cb789-87bb-43be-bb84-dd3db54a81ed,Namespace:default,Attempt:1,} returns sandbox id \"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae\"" Dec 13 01:55:39.507876 kubelet[2642]: E1213 01:55:39.507806 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:40.430076 systemd-networkd[1696]: cali81af34b576d: Gained IPv6LL Dec 13 01:55:40.478454 containerd[2144]: time="2024-12-13T01:55:40.478102879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:40.480471 containerd[2144]: time="2024-12-13T01:55:40.480357871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:55:40.483443 containerd[2144]: time="2024-12-13T01:55:40.483359707Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:40.487776 containerd[2144]: time="2024-12-13T01:55:40.487694239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:40.489220 containerd[2144]: time="2024-12-13T01:55:40.489158875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size 
\"8834384\" in 1.236111979s" Dec 13 01:55:40.489331 containerd[2144]: time="2024-12-13T01:55:40.489218251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:55:40.491310 containerd[2144]: time="2024-12-13T01:55:40.491273023Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:55:40.493112 containerd[2144]: time="2024-12-13T01:55:40.493040467Z" level=info msg="CreateContainer within sandbox \"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:40.508169 kubelet[2642]: E1213 01:55:40.508118 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:40.525324 containerd[2144]: time="2024-12-13T01:55:40.525126835Z" level=info msg="CreateContainer within sandbox \"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6c4174b5dc3105b5617445d10d73a74206ad7286bd869a05b7dea907cb2a3112\"" Dec 13 01:55:40.526399 containerd[2144]: time="2024-12-13T01:55:40.526339819Z" level=info msg="StartContainer for \"6c4174b5dc3105b5617445d10d73a74206ad7286bd869a05b7dea907cb2a3112\"" Dec 13 01:55:40.631997 containerd[2144]: time="2024-12-13T01:55:40.631922155Z" level=info msg="StartContainer for \"6c4174b5dc3105b5617445d10d73a74206ad7286bd869a05b7dea907cb2a3112\" returns successfully" Dec 13 01:55:41.004809 systemd-networkd[1696]: cali16aeb5de558: Gained IPv6LL Dec 13 01:55:41.509191 kubelet[2642]: E1213 01:55:41.509065 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:42.510237 kubelet[2642]: E1213 01:55:42.510152 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:55:42.915616 update_engine[2120]: I20241213 01:55:42.915439 2120 update_attempter.cc:509] Updating boot flags... Dec 13 01:55:43.035623 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3756) Dec 13 01:55:43.441005 ntpd[2103]: Listen normally on 8 cali81af34b576d [fe80::ecee:eeff:feee:eeee%6]:123 Dec 13 01:55:43.441148 ntpd[2103]: Listen normally on 9 cali16aeb5de558 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:43.441733 ntpd[2103]: 13 Dec 01:55:43 ntpd[2103]: Listen normally on 8 cali81af34b576d [fe80::ecee:eeff:feee:eeee%6]:123 Dec 13 01:55:43.441733 ntpd[2103]: 13 Dec 01:55:43 ntpd[2103]: Listen normally on 9 cali16aeb5de558 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:43.460226 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3759) Dec 13 01:55:43.510825 kubelet[2642]: E1213 01:55:43.510765 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:44.048227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352391084.mount: Deactivated successfully. 
Dec 13 01:55:44.512067 kubelet[2642]: E1213 01:55:44.511931 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:45.513342 kubelet[2642]: E1213 01:55:45.513287 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:45.593150 containerd[2144]: time="2024-12-13T01:55:45.593062680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:45.595182 containerd[2144]: time="2024-12-13T01:55:45.595097460Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 01:55:45.597698 containerd[2144]: time="2024-12-13T01:55:45.597612972Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:45.610783 containerd[2144]: time="2024-12-13T01:55:45.610696188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:45.612307 containerd[2144]: time="2024-12-13T01:55:45.612106200Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 5.120623021s" Dec 13 01:55:45.612307 containerd[2144]: time="2024-12-13T01:55:45.612173700Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:55:45.614348 containerd[2144]: 
time="2024-12-13T01:55:45.614037456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:45.616019 containerd[2144]: time="2024-12-13T01:55:45.615940284Z" level=info msg="CreateContainer within sandbox \"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:55:45.648791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944786020.mount: Deactivated successfully. Dec 13 01:55:45.654611 containerd[2144]: time="2024-12-13T01:55:45.654547236Z" level=info msg="CreateContainer within sandbox \"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"885442c49ab238b97f23d570d7f06837c9287624f79057f20335a538a7806dbe\"" Dec 13 01:55:45.655769 containerd[2144]: time="2024-12-13T01:55:45.655706220Z" level=info msg="StartContainer for \"885442c49ab238b97f23d570d7f06837c9287624f79057f20335a538a7806dbe\"" Dec 13 01:55:45.761542 containerd[2144]: time="2024-12-13T01:55:45.761468281Z" level=info msg="StartContainer for \"885442c49ab238b97f23d570d7f06837c9287624f79057f20335a538a7806dbe\" returns successfully" Dec 13 01:55:46.113264 kubelet[2642]: I1213 01:55:46.113183 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-pp7nv" podStartSLOduration=14.845808775 podStartE2EDuration="21.113116762s" podCreationTimestamp="2024-12-13 01:55:25 +0000 UTC" firstStartedPulling="2024-12-13 01:55:39.345461381 +0000 UTC m=+28.888672177" lastFinishedPulling="2024-12-13 01:55:45.612769368 +0000 UTC m=+35.155980164" observedRunningTime="2024-12-13 01:55:45.812476033 +0000 UTC m=+35.355686853" watchObservedRunningTime="2024-12-13 01:55:46.113116762 +0000 UTC m=+35.656327582" Dec 13 01:55:46.514273 kubelet[2642]: E1213 01:55:46.514214 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:55:47.004139 containerd[2144]: time="2024-12-13T01:55:47.004079147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:47.006119 containerd[2144]: time="2024-12-13T01:55:47.006057383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:55:47.008268 containerd[2144]: time="2024-12-13T01:55:47.008179379Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:47.014545 containerd[2144]: time="2024-12-13T01:55:47.014482427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:47.016342 containerd[2144]: time="2024-12-13T01:55:47.016164971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.402064779s" Dec 13 01:55:47.016342 containerd[2144]: time="2024-12-13T01:55:47.016223087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:55:47.019526 containerd[2144]: time="2024-12-13T01:55:47.019286015Z" level=info msg="CreateContainer within sandbox \"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:47.045880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504691986.mount: Deactivated successfully. Dec 13 01:55:47.055877 containerd[2144]: time="2024-12-13T01:55:47.054039947Z" level=info msg="CreateContainer within sandbox \"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5b82803abcc43eaf94bfec1ce95caffd8ac89dd63113f593c13cae57228a671d\"" Dec 13 01:55:47.057538 containerd[2144]: time="2024-12-13T01:55:47.056445659Z" level=info msg="StartContainer for \"5b82803abcc43eaf94bfec1ce95caffd8ac89dd63113f593c13cae57228a671d\"" Dec 13 01:55:47.166418 containerd[2144]: time="2024-12-13T01:55:47.166270464Z" level=info msg="StartContainer for \"5b82803abcc43eaf94bfec1ce95caffd8ac89dd63113f593c13cae57228a671d\" returns successfully" Dec 13 01:55:47.514780 kubelet[2642]: E1213 01:55:47.514689 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:47.627528 kubelet[2642]: I1213 01:55:47.627471 2642 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:55:47.627528 kubelet[2642]: I1213 01:55:47.627533 2642 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:55:48.515637 kubelet[2642]: E1213 01:55:48.515567 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:49.516778 kubelet[2642]: E1213 01:55:49.516712 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:50.517493 kubelet[2642]: E1213 01:55:50.517430 2642 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:51.485255 kubelet[2642]: E1213 01:55:51.485189 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:51.517631 kubelet[2642]: E1213 01:55:51.517593 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:52.518464 kubelet[2642]: E1213 01:55:52.518415 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:53.519494 kubelet[2642]: E1213 01:55:53.519423 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:54.035895 kubelet[2642]: I1213 01:55:54.035757 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hqj6z" podStartSLOduration=35.271042091 podStartE2EDuration="43.035697606s" podCreationTimestamp="2024-12-13 01:55:11 +0000 UTC" firstStartedPulling="2024-12-13 01:55:39.25210372 +0000 UTC m=+28.795314504" lastFinishedPulling="2024-12-13 01:55:47.016759211 +0000 UTC m=+36.559970019" observedRunningTime="2024-12-13 01:55:47.825857451 +0000 UTC m=+37.369068283" watchObservedRunningTime="2024-12-13 01:55:54.035697606 +0000 UTC m=+43.578908414" Dec 13 01:55:54.036263 kubelet[2642]: I1213 01:55:54.036231 2642 topology_manager.go:215] "Topology Admit Handler" podUID="f3deaeca-8df5-4bb8-93a8-dbb231013c77" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:55:54.104278 kubelet[2642]: I1213 01:55:54.104107 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xl4\" (UniqueName: \"kubernetes.io/projected/f3deaeca-8df5-4bb8-93a8-dbb231013c77-kube-api-access-d4xl4\") pod \"nfs-server-provisioner-0\" (UID: 
\"f3deaeca-8df5-4bb8-93a8-dbb231013c77\") " pod="default/nfs-server-provisioner-0" Dec 13 01:55:54.104278 kubelet[2642]: I1213 01:55:54.104177 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f3deaeca-8df5-4bb8-93a8-dbb231013c77-data\") pod \"nfs-server-provisioner-0\" (UID: \"f3deaeca-8df5-4bb8-93a8-dbb231013c77\") " pod="default/nfs-server-provisioner-0" Dec 13 01:55:54.342308 containerd[2144]: time="2024-12-13T01:55:54.341744779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f3deaeca-8df5-4bb8-93a8-dbb231013c77,Namespace:default,Attempt:0,}" Dec 13 01:55:54.520159 kubelet[2642]: E1213 01:55:54.520091 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:54.569507 systemd-networkd[1696]: cali60e51b789ff: Link UP Dec 13 01:55:54.569919 systemd-networkd[1696]: cali60e51b789ff: Gained carrier Dec 13 01:55:54.574336 (udev-worker)[4102]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.441 [INFO][4085] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.238-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f3deaeca-8df5-4bb8-93a8-dbb231013c77 1102 0 2024-12-13 01:55:54 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.28.238 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.442 [INFO][4085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.492 [INFO][4095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" 
HandleID="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Workload="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.515 [INFO][4095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" HandleID="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Workload="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d500), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.238", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 01:55:54.4928933 +0000 UTC"}, Hostname:"172.31.28.238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.515 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.515 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.515 [INFO][4095] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.238' Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.518 [INFO][4095] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.524 [INFO][4095] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.530 [INFO][4095] ipam/ipam.go 489: Trying affinity for 192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.533 [INFO][4095] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.536 [INFO][4095] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.536 [INFO][4095] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.539 [INFO][4095] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.549 [INFO][4095] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.561 [INFO][4095] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.195/26] block=192.168.96.192/26 
handle="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.561 [INFO][4095] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.195/26] handle="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" host="172.31.28.238" Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.561 [INFO][4095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:54.599433 containerd[2144]: 2024-12-13 01:55:54.561 [INFO][4095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.195/26] IPv6=[] ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" HandleID="k8s-pod-network.d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Workload="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.602225 containerd[2144]: 2024-12-13 01:55:54.564 [INFO][4085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f3deaeca-8df5-4bb8-93a8-dbb231013c77", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:54.602225 containerd[2144]: 2024-12-13 01:55:54.564 [INFO][4085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.195/32] ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.602225 containerd[2144]: 2024-12-13 01:55:54.564 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.602225 containerd[2144]: 2024-12-13 01:55:54.571 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.602873 containerd[2144]: 2024-12-13 01:55:54.572 [INFO][4085] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f3deaeca-8df5-4bb8-93a8-dbb231013c77", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:6f:8e:7c:f0:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:54.602873 containerd[2144]: 2024-12-13 01:55:54.596 [INFO][4085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.28.238-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:55:54.643095 containerd[2144]: time="2024-12-13T01:55:54.641320197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:54.643537 containerd[2144]: time="2024-12-13T01:55:54.643139205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:54.643537 containerd[2144]: time="2024-12-13T01:55:54.643216197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:54.644001 containerd[2144]: time="2024-12-13T01:55:54.643529517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:54.733636 containerd[2144]: time="2024-12-13T01:55:54.733564209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f3deaeca-8df5-4bb8-93a8-dbb231013c77,Namespace:default,Attempt:0,} returns sandbox id \"d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c\"" Dec 13 01:55:54.736485 containerd[2144]: time="2024-12-13T01:55:54.736410237Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:55:55.521576 kubelet[2642]: E1213 01:55:55.521504 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:56.522644 kubelet[2642]: E1213 01:55:56.522600 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:56.556652 systemd-networkd[1696]: cali60e51b789ff: Gained IPv6LL Dec 13 01:55:57.326232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823249866.mount: Deactivated successfully. 
Dec 13 01:55:57.524170 kubelet[2642]: E1213 01:55:57.523824 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.524013 kubelet[2642]: E1213 01:55:58.523954 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:59.440457 ntpd[2103]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:59.442005 ntpd[2103]: 13 Dec 01:55:59 ntpd[2103]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:59.525155 kubelet[2642]: E1213 01:55:59.525074 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:00.283250 containerd[2144]: time="2024-12-13T01:56:00.283190269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.287106 containerd[2144]: time="2024-12-13T01:56:00.287003785Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Dec 13 01:56:00.289419 containerd[2144]: time="2024-12-13T01:56:00.289291801Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.303417 containerd[2144]: time="2024-12-13T01:56:00.302985409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:00.306853 containerd[2144]: time="2024-12-13T01:56:00.306787945Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.570311168s" Dec 13 01:56:00.308615 containerd[2144]: time="2024-12-13T01:56:00.307101373Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 01:56:00.314959 containerd[2144]: time="2024-12-13T01:56:00.314891245Z" level=info msg="CreateContainer within sandbox \"d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:56:00.346463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435964471.mount: Deactivated successfully. Dec 13 01:56:00.353716 containerd[2144]: time="2024-12-13T01:56:00.353648713Z" level=info msg="CreateContainer within sandbox \"d90551ffdcc41c31cd7803b3a5fcb163749b7a3aadd72f218b3c0508d31c493c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"15a6c1aaff34ed67dc059ec0f734035db05ac26b631b6dfbc3cef3fc6a49e49a\"" Dec 13 01:56:00.354851 containerd[2144]: time="2024-12-13T01:56:00.354716305Z" level=info msg="StartContainer for \"15a6c1aaff34ed67dc059ec0f734035db05ac26b631b6dfbc3cef3fc6a49e49a\"" Dec 13 01:56:00.460409 containerd[2144]: time="2024-12-13T01:56:00.460321934Z" level=info msg="StartContainer for \"15a6c1aaff34ed67dc059ec0f734035db05ac26b631b6dfbc3cef3fc6a49e49a\" returns successfully" Dec 13 01:56:00.526417 kubelet[2642]: E1213 01:56:00.525594 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:01.526547 kubelet[2642]: E1213 01:56:01.526479 2642 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:02.526750 kubelet[2642]: E1213 01:56:02.526682 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:03.527192 kubelet[2642]: E1213 01:56:03.527113 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:04.527436 kubelet[2642]: E1213 01:56:04.527327 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:05.527564 kubelet[2642]: E1213 01:56:05.527494 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:06.527712 kubelet[2642]: E1213 01:56:06.527652 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:07.528282 kubelet[2642]: E1213 01:56:07.528220 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:08.528877 kubelet[2642]: E1213 01:56:08.528809 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:09.529967 kubelet[2642]: E1213 01:56:09.529906 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:10.530623 kubelet[2642]: E1213 01:56:10.530562 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:11.485465 kubelet[2642]: E1213 01:56:11.485412 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:11.527823 containerd[2144]: time="2024-12-13T01:56:11.527687605Z" level=info msg="StopPodSandbox for 
\"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\"" Dec 13 01:56:11.530805 kubelet[2642]: E1213 01:56:11.530688 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.592 [WARNING][4275] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-csi--node--driver--hqj6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cde905-d050-47e3-b9a7-34a19a0f3e58", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5", Pod:"csi-node-driver-hqj6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali81af34b576d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.592 [INFO][4275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.592 [INFO][4275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" iface="eth0" netns="" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.592 [INFO][4275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.593 [INFO][4275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.627 [INFO][4282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.628 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.628 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.642 [WARNING][4282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.642 [INFO][4282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.645 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:11.651661 containerd[2144]: 2024-12-13 01:56:11.648 [INFO][4275] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.653359 containerd[2144]: time="2024-12-13T01:56:11.651972769Z" level=info msg="TearDown network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" successfully" Dec 13 01:56:11.653359 containerd[2144]: time="2024-12-13T01:56:11.652011193Z" level=info msg="StopPodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" returns successfully" Dec 13 01:56:11.654072 containerd[2144]: time="2024-12-13T01:56:11.653835505Z" level=info msg="RemovePodSandbox for \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\"" Dec 13 01:56:11.654072 containerd[2144]: time="2024-12-13T01:56:11.653886589Z" level=info msg="Forcibly stopping sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\"" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.716 [WARNING][4302] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-csi--node--driver--hqj6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cde905-d050-47e3-b9a7-34a19a0f3e58", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"629e10d9bb63ee6ed8b96909f7f9bae40e915eb9705c4df89ee5398946c5fbd5", Pod:"csi-node-driver-hqj6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali81af34b576d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.717 [INFO][4302] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.717 [INFO][4302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" iface="eth0" netns="" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.717 [INFO][4302] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.717 [INFO][4302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.751 [INFO][4308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.751 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.751 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.765 [WARNING][4308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.765 [INFO][4308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" HandleID="k8s-pod-network.53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Workload="172.31.28.238-k8s-csi--node--driver--hqj6z-eth0" Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.767 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:11.772502 containerd[2144]: 2024-12-13 01:56:11.770 [INFO][4302] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de" Dec 13 01:56:11.772502 containerd[2144]: time="2024-12-13T01:56:11.772368398Z" level=info msg="TearDown network for sandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" successfully" Dec 13 01:56:11.778072 containerd[2144]: time="2024-12-13T01:56:11.777973670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:56:11.778072 containerd[2144]: time="2024-12-13T01:56:11.778061846Z" level=info msg="RemovePodSandbox \"53b0b069bbd903274d4d41fefde2e17276319b38dbd1d8dbc478381ba8d384de\" returns successfully" Dec 13 01:56:11.779189 containerd[2144]: time="2024-12-13T01:56:11.778770302Z" level=info msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\"" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.842 [WARNING][4326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3a5cb789-87bb-43be-bb84-dd3db54a81ed", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae", Pod:"nginx-deployment-6d5f899847-pp7nv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali16aeb5de558", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.842 [INFO][4326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.842 [INFO][4326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" iface="eth0" netns="" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.842 [INFO][4326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.842 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.894 [INFO][4332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.894 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.894 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.910 [WARNING][4332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.910 [INFO][4332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.913 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:11.917596 containerd[2144]: 2024-12-13 01:56:11.915 [INFO][4326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:11.918928 containerd[2144]: time="2024-12-13T01:56:11.918479799Z" level=info msg="TearDown network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" successfully" Dec 13 01:56:11.918928 containerd[2144]: time="2024-12-13T01:56:11.918520179Z" level=info msg="StopPodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" returns successfully" Dec 13 01:56:11.919360 containerd[2144]: time="2024-12-13T01:56:11.919307919Z" level=info msg="RemovePodSandbox for \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\"" Dec 13 01:56:11.919517 containerd[2144]: time="2024-12-13T01:56:11.919363371Z" level=info msg="Forcibly stopping sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\"" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:11.983 [WARNING][4350] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"3a5cb789-87bb-43be-bb84-dd3db54a81ed", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"537d24ce3adcf5575b71ea903a70c07cd6821ee9be25ad0fbe156c9772f8feae", Pod:"nginx-deployment-6d5f899847-pp7nv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali16aeb5de558", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:11.983 [INFO][4350] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:11.983 [INFO][4350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" iface="eth0" netns="" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:11.983 [INFO][4350] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:11.983 [INFO][4350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.020 [INFO][4357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.020 [INFO][4357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.021 [INFO][4357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.033 [WARNING][4357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.033 [INFO][4357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" HandleID="k8s-pod-network.2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Workload="172.31.28.238-k8s-nginx--deployment--6d5f899847--pp7nv-eth0" Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.035 [INFO][4357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:56:12.039732 containerd[2144]: 2024-12-13 01:56:12.037 [INFO][4350] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242" Dec 13 01:56:12.040548 containerd[2144]: time="2024-12-13T01:56:12.039729647Z" level=info msg="TearDown network for sandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" successfully" Dec 13 01:56:12.045173 containerd[2144]: time="2024-12-13T01:56:12.045107051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:56:12.046010 containerd[2144]: time="2024-12-13T01:56:12.045187559Z" level=info msg="RemovePodSandbox \"2c707e89c6c5e0d77eb32072857632fe530bc2c01cf9d1a09c3cb395f51a2242\" returns successfully" Dec 13 01:56:12.531651 kubelet[2642]: E1213 01:56:12.531592 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:13.532014 kubelet[2642]: E1213 01:56:13.531950 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:14.532514 kubelet[2642]: E1213 01:56:14.532449 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:15.533188 kubelet[2642]: E1213 01:56:15.533128 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:16.534106 kubelet[2642]: E1213 01:56:16.534041 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:17.534827 kubelet[2642]: E1213 01:56:17.534752 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:18.535868 kubelet[2642]: E1213 01:56:18.535805 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:19.536884 kubelet[2642]: E1213 01:56:19.536824 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:20.537713 kubelet[2642]: E1213 01:56:20.537650 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:21.538613 kubelet[2642]: E1213 01:56:21.538549 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:56:22.539600 kubelet[2642]: E1213 01:56:22.539542 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:23.539897 kubelet[2642]: E1213 01:56:23.539836 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:24.540844 kubelet[2642]: E1213 01:56:24.540779 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:25.421528 kubelet[2642]: I1213 01:56:25.421445 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=25.849707842 podStartE2EDuration="31.421318418s" podCreationTimestamp="2024-12-13 01:55:54 +0000 UTC" firstStartedPulling="2024-12-13 01:55:54.736059813 +0000 UTC m=+44.279270597" lastFinishedPulling="2024-12-13 01:56:00.307670377 +0000 UTC m=+49.850881173" observedRunningTime="2024-12-13 01:56:00.869674108 +0000 UTC m=+50.412884928" watchObservedRunningTime="2024-12-13 01:56:25.421318418 +0000 UTC m=+74.964529226" Dec 13 01:56:25.421879 kubelet[2642]: I1213 01:56:25.421849 2642 topology_manager.go:215] "Topology Admit Handler" podUID="23384bc4-d79f-486e-8b08-1aeb1618cefd" podNamespace="default" podName="test-pod-1" Dec 13 01:56:25.486690 kubelet[2642]: I1213 01:56:25.486630 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cce9d8ff-c49a-4722-b1c9-33335223e14b\" (UniqueName: \"kubernetes.io/nfs/23384bc4-d79f-486e-8b08-1aeb1618cefd-pvc-cce9d8ff-c49a-4722-b1c9-33335223e14b\") pod \"test-pod-1\" (UID: \"23384bc4-d79f-486e-8b08-1aeb1618cefd\") " pod="default/test-pod-1" Dec 13 01:56:25.486858 kubelet[2642]: I1213 01:56:25.486715 2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrww\" 
(UniqueName: \"kubernetes.io/projected/23384bc4-d79f-486e-8b08-1aeb1618cefd-kube-api-access-mnrww\") pod \"test-pod-1\" (UID: \"23384bc4-d79f-486e-8b08-1aeb1618cefd\") " pod="default/test-pod-1" Dec 13 01:56:25.541579 kubelet[2642]: E1213 01:56:25.541517 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:25.623437 kernel: FS-Cache: Loaded Dec 13 01:56:25.668153 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:56:25.668394 kernel: RPC: Registered udp transport module. Dec 13 01:56:25.668443 kernel: RPC: Registered tcp transport module. Dec 13 01:56:25.668926 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:56:25.669867 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:56:26.005094 kernel: NFS: Registering the id_resolver key type Dec 13 01:56:26.005285 kernel: Key type id_resolver registered Dec 13 01:56:26.005364 kernel: Key type id_legacy registered Dec 13 01:56:26.042129 nfsidmap[4412]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:26.048602 nfsidmap[4413]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:26.328196 containerd[2144]: time="2024-12-13T01:56:26.327759038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:23384bc4-d79f-486e-8b08-1aeb1618cefd,Namespace:default,Attempt:0,}" Dec 13 01:56:26.525072 (udev-worker)[4400]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:56:26.526866 systemd-networkd[1696]: cali5ec59c6bf6e: Link UP Dec 13 01:56:26.528260 systemd-networkd[1696]: cali5ec59c6bf6e: Gained carrier Dec 13 01:56:26.546928 kubelet[2642]: E1213 01:56:26.545935 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.415 [INFO][4416] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.28.238-k8s-test--pod--1-eth0 default 23384bc4-d79f-486e-8b08-1aeb1618cefd 1198 0 2024-12-13 01:55:54 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.28.238 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.415 [INFO][4416] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.463 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" HandleID="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Workload="172.31.28.238-k8s-test--pod--1-eth0" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.479 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" 
HandleID="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Workload="172.31.28.238-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002631f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.28.238", "pod":"test-pod-1", "timestamp":"2024-12-13 01:56:26.463051623 +0000 UTC"}, Hostname:"172.31.28.238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.479 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.479 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.479 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.28.238' Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.482 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" host="172.31.28.238" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.487 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.28.238" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.494 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.96.192/26 host="172.31.28.238" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.497 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.500 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="172.31.28.238" Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.500 
[INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" host="172.31.28.238"
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.502 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.508 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" host="172.31.28.238"
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.519 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.96.196/26] block=192.168.96.192/26 handle="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" host="172.31.28.238"
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.519 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.96.196/26] handle="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" host="172.31.28.238"
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.519 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.519 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.196/26] IPv6=[] ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" HandleID="k8s-pod-network.bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Workload="172.31.28.238-k8s-test--pod--1-eth0"
Dec 13 01:56:26.549575 containerd[2144]: 2024-12-13 01:56:26.522 [INFO][4416] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"23384bc4-d79f-486e-8b08-1aeb1618cefd", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:56:26.552979 containerd[2144]: 2024-12-13 01:56:26.522 [INFO][4416] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.96.196/32] ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0"
Dec 13 01:56:26.552979 containerd[2144]: 2024-12-13 01:56:26.522 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0"
Dec 13 01:56:26.552979 containerd[2144]: 2024-12-13 01:56:26.527 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0"
Dec 13 01:56:26.552979 containerd[2144]: 2024-12-13 01:56:26.530 [INFO][4416] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.28.238-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"23384bc4-d79f-486e-8b08-1aeb1618cefd", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 55, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.28.238", ContainerID:"bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4a:00:0c:f6:7c:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:56:26.552979 containerd[2144]: 2024-12-13 01:56:26.544 [INFO][4416] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.28.238-k8s-test--pod--1-eth0"
Dec 13 01:56:26.588950 containerd[2144]: time="2024-12-13T01:56:26.588551584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:26.588950 containerd[2144]: time="2024-12-13T01:56:26.588755644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:26.590233 containerd[2144]: time="2024-12-13T01:56:26.589922080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:26.591226 containerd[2144]: time="2024-12-13T01:56:26.590920060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:26.684473 containerd[2144]: time="2024-12-13T01:56:26.684419212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:23384bc4-d79f-486e-8b08-1aeb1618cefd,Namespace:default,Attempt:0,} returns sandbox id \"bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0\""
Dec 13 01:56:26.687469 containerd[2144]: time="2024-12-13T01:56:26.687276988Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:56:27.019168 containerd[2144]: time="2024-12-13T01:56:27.019091234Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:27.021163 containerd[2144]: time="2024-12-13T01:56:27.021089738Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 01:56:27.026744 containerd[2144]: time="2024-12-13T01:56:27.026579834Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 339.243434ms"
Dec 13 01:56:27.026744 containerd[2144]: time="2024-12-13T01:56:27.026653658Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 01:56:27.029565 containerd[2144]: time="2024-12-13T01:56:27.029402066Z" level=info msg="CreateContainer within sandbox \"bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 01:56:27.062844 containerd[2144]: time="2024-12-13T01:56:27.062678870Z" level=info msg="CreateContainer within sandbox \"bfbb70d2e9e1f467d08b29944231594d22a979c018cc78144b90204f8216f8a0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"203a5e435aea999aed5be0e189afe98b0836c1cf04121b5af6bacc01343c84ec\""
Dec 13 01:56:27.064953 containerd[2144]: time="2024-12-13T01:56:27.063584978Z" level=info msg="StartContainer for \"203a5e435aea999aed5be0e189afe98b0836c1cf04121b5af6bacc01343c84ec\""
Dec 13 01:56:27.152547 containerd[2144]: time="2024-12-13T01:56:27.152483546Z" level=info msg="StartContainer for \"203a5e435aea999aed5be0e189afe98b0836c1cf04121b5af6bacc01343c84ec\" returns successfully"
Dec 13 01:56:27.546169 kubelet[2642]: E1213 01:56:27.546117 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:27.660834 systemd-networkd[1696]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 01:56:27.947102 kubelet[2642]: I1213 01:56:27.946686 2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.606150604 podStartE2EDuration="33.946629474s" podCreationTimestamp="2024-12-13 01:55:54 +0000 UTC" firstStartedPulling="2024-12-13 01:56:26.68645584 +0000 UTC m=+76.229666636" lastFinishedPulling="2024-12-13 01:56:27.02693471 +0000 UTC m=+76.570145506" observedRunningTime="2024-12-13 01:56:27.946341474 +0000 UTC m=+77.489552282" watchObservedRunningTime="2024-12-13 01:56:27.946629474 +0000 UTC m=+77.489840294"
Dec 13 01:56:28.546580 kubelet[2642]: E1213 01:56:28.546523 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:29.547760 kubelet[2642]: E1213 01:56:29.547701 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:30.440553 ntpd[2103]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:56:30.441160 ntpd[2103]: 13 Dec 01:56:30 ntpd[2103]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Dec 13 01:56:30.548766 kubelet[2642]: E1213 01:56:30.548694 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:31.484833 kubelet[2642]: E1213 01:56:31.484758 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:31.549504 kubelet[2642]: E1213 01:56:31.549444 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:32.550477 kubelet[2642]: E1213 01:56:32.550413 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:33.551357 kubelet[2642]: E1213 01:56:33.551292 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:34.551871 kubelet[2642]: E1213 01:56:34.551800 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:35.552016 kubelet[2642]: E1213 01:56:35.551954 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:36.552293 kubelet[2642]: E1213 01:56:36.552230 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:37.553440 kubelet[2642]: E1213 01:56:37.553342 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:38.553590 kubelet[2642]: E1213 01:56:38.553525 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:39.554213 kubelet[2642]: E1213 01:56:39.554154 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:40.555439 kubelet[2642]: E1213 01:56:40.555339 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:41.556290 kubelet[2642]: E1213 01:56:41.556224 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:42.557181 kubelet[2642]: E1213 01:56:42.557125 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:43.558165 kubelet[2642]: E1213 01:56:43.558095 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:43.735085 kubelet[2642]: E1213 01:56:43.735005 2642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:56:44.558609 kubelet[2642]: E1213 01:56:44.558540 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:45.558947 kubelet[2642]: E1213 01:56:45.558889 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:46.559538 kubelet[2642]: E1213 01:56:46.559450 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:47.559987 kubelet[2642]: E1213 01:56:47.559917 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:48.560892 kubelet[2642]: E1213 01:56:48.560830 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:49.561671 kubelet[2642]: E1213 01:56:49.561610 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:50.562341 kubelet[2642]: E1213 01:56:50.562286 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:51.485401 kubelet[2642]: E1213 01:56:51.485339 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:51.563230 kubelet[2642]: E1213 01:56:51.563164 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:52.563776 kubelet[2642]: E1213 01:56:52.563715 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:53.564508 kubelet[2642]: E1213 01:56:53.564412 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:53.736057 kubelet[2642]: E1213 01:56:53.735997 2642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:56:54.565048 kubelet[2642]: E1213 01:56:54.564991 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:55.565954 kubelet[2642]: E1213 01:56:55.565892 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:56.567041 kubelet[2642]: E1213 01:56:56.566988 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:57.567349 kubelet[2642]: E1213 01:56:57.567278 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:58.568142 kubelet[2642]: E1213 01:56:58.568066 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:59.568828 kubelet[2642]: E1213 01:56:59.568726 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:00.569975 kubelet[2642]: E1213 01:57:00.569910 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:01.571117 kubelet[2642]: E1213 01:57:01.571050 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:02.572047 kubelet[2642]: E1213 01:57:02.571986 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:03.572973 kubelet[2642]: E1213 01:57:03.572903 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:03.736959 kubelet[2642]: E1213 01:57:03.736899 2642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:04.573466 kubelet[2642]: E1213 01:57:04.573411 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:05.573770 kubelet[2642]: E1213 01:57:05.573709 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:06.574880 kubelet[2642]: E1213 01:57:06.574807 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:07.576046 kubelet[2642]: E1213 01:57:07.575980 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:08.576528 kubelet[2642]: E1213 01:57:08.576468 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:09.576951 kubelet[2642]: E1213 01:57:09.576880 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:10.577614 kubelet[2642]: E1213 01:57:10.577540 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:11.484532 kubelet[2642]: E1213 01:57:11.484476 2642 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:11.578726 kubelet[2642]: E1213 01:57:11.578646 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:12.579603 kubelet[2642]: E1213 01:57:12.579547 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:13.560934 kubelet[2642]: E1213 01:57:13.558132 2642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": unexpected EOF"
Dec 13 01:57:13.580428 kubelet[2642]: E1213 01:57:13.577025 2642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": read tcp 172.31.28.238:38818->172.31.22.156:6443: read: connection reset by peer"
Dec 13 01:57:13.584948 kubelet[2642]: I1213 01:57:13.584504 2642 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 01:57:13.584948 kubelet[2642]: E1213 01:57:13.581491 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:13.586444 kubelet[2642]: E1213 01:57:13.585528 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="200ms"
Dec 13 01:57:13.787272 kubelet[2642]: E1213 01:57:13.787218 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="400ms"
Dec 13 01:57:14.188933 kubelet[2642]: E1213 01:57:14.188870 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="800ms"
Dec 13 01:57:14.585552 kubelet[2642]: E1213 01:57:14.585438 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:14.637031 kubelet[2642]: E1213 01:57:14.636986 2642 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.238\": Get \"https://172.31.22.156:6443/api/v1/nodes/172.31.28.238?resourceVersion=0&timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused"
Dec 13 01:57:14.637743 kubelet[2642]: E1213 01:57:14.637519 2642 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.238\": Get \"https://172.31.22.156:6443/api/v1/nodes/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused"
Dec 13 01:57:14.638712 kubelet[2642]: E1213 01:57:14.638313 2642 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.238\": Get \"https://172.31.22.156:6443/api/v1/nodes/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused"
Dec 13 01:57:14.639167 kubelet[2642]: E1213 01:57:14.639086 2642 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.238\": Get \"https://172.31.22.156:6443/api/v1/nodes/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused"
Dec 13 01:57:14.639997 kubelet[2642]: E1213 01:57:14.639951 2642 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.28.238\": Get \"https://172.31.22.156:6443/api/v1/nodes/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused"
Dec 13 01:57:14.639997 kubelet[2642]: E1213 01:57:14.639991 2642 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 01:57:14.990756 kubelet[2642]: E1213 01:57:14.990592 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="1.6s"
Dec 13 01:57:15.586339 kubelet[2642]: E1213 01:57:15.586270 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.086279 kubelet[2642]: E1213 01:57:16.086220 2642 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.156:6443/api/v1/namespaces/calico-system/events\": dial tcp 172.31.22.156:6443: connect: connection refused" event=<
Dec 13 01:57:16.086279 kubelet[2642]: &Event{ObjectMeta:{calico-node-9wb5b.181099de2ed8824d calico-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-9wb5b,UID:5e51dc3a-bab0-4a41-bb37-4a5f139f010d,APIVersion:v1,ResourceVersion:770,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2024-12-13 01:57:16.077 [INFO][408] node/health.go 202: Number of node(s) with BGP peering established = 0
Dec 13 01:57:16.086279 kubelet[2642]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.22.156
Dec 13 01:57:16.086279 kubelet[2642]: ,Source:EventSource{Component:kubelet,Host:172.31.28.238,},FirstTimestamp:2024-12-13 01:57:16.085371469 +0000 UTC m=+125.628582349,LastTimestamp:2024-12-13 01:57:16.085371469 +0000 UTC m=+125.628582349,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.238,}
Dec 13 01:57:16.086279 kubelet[2642]: >
Dec 13 01:57:16.587507 kubelet[2642]: E1213 01:57:16.587439 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.592304 kubelet[2642]: E1213 01:57:16.592254 2642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.238?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="3.2s"
Dec 13 01:57:17.588554 kubelet[2642]: E1213 01:57:17.588493 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:18.589621 kubelet[2642]: E1213 01:57:18.589555 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:19.590076 kubelet[2642]: E1213 01:57:19.590014 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:20.590474 kubelet[2642]: E1213 01:57:20.590416 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:21.591248 kubelet[2642]: E1213 01:57:21.591186 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:22.592300 kubelet[2642]: E1213 01:57:22.592239 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:23.592775 kubelet[2642]: E1213 01:57:23.592710 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:24.593702 kubelet[2642]: E1213 01:57:24.593644 2642 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"