Dec 13 01:53:26.188755 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:53:26.188800 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:53:26.188825 kernel: KASLR disabled due to lack of seed
Dec 13 01:53:26.188841 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:53:26.188857 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:53:26.188873 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:53:26.188890 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:53:26.188906 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:53:26.188921 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:53:26.188937 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:53:26.188957 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:53:26.188973 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:53:26.188988 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:53:26.189004 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:53:26.189022 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:53:26.189043 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:53:26.189060 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:53:26.189076 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:53:26.189092 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:53:26.189133 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:53:26.189156 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:53:26.189175 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:26.189192 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:53:26.189209 kernel: Zone ranges:
Dec 13 01:53:26.189226 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:53:26.189243 kernel: DMA32 empty
Dec 13 01:53:26.189266 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:53:26.189283 kernel: Movable zone start for each node
Dec 13 01:53:26.189300 kernel: Early memory node ranges
Dec 13 01:53:26.189316 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:53:26.189333 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:53:26.189349 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:53:26.189365 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:53:26.189381 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:53:26.189398 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:53:26.189415 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:53:26.189431 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:53:26.189448 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:26.189470 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:53:26.189487 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:53:26.189510 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:53:26.189528 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:53:26.189545 kernel: psci: Trusted OS migration not required
Dec 13 01:53:26.189567 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:53:26.189584 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:53:26.189602 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:53:26.189619 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:53:26.189636 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:53:26.189653 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:53:26.189671 kernel: CPU features: detected: Spectre-v2
Dec 13 01:53:26.189688 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:53:26.189705 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:53:26.189723 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:53:26.189740 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:53:26.189762 kernel: alternatives: applying boot alternatives
Dec 13 01:53:26.189782 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:26.189801 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:53:26.189818 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:53:26.189837 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:53:26.189854 kernel: Fallback order for Node 0: 0
Dec 13 01:53:26.189872 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:53:26.189890 kernel: Policy zone: Normal
Dec 13 01:53:26.189908 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:53:26.189925 kernel: software IO TLB: area num 2.
Dec 13 01:53:26.189943 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:53:26.189966 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:53:26.189984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:53:26.190001 kernel: trace event string verifier disabled
Dec 13 01:53:26.190018 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:53:26.190037 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:53:26.190055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:53:26.190073 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:53:26.190092 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:53:26.193151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:53:26.193199 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:53:26.193219 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:53:26.193248 kernel: GICv3: 96 SPIs implemented
Dec 13 01:53:26.193267 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:53:26.193284 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:53:26.193301 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:53:26.193318 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:53:26.193335 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:53:26.193352 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:53:26.193370 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:53:26.193387 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:53:26.193404 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:53:26.193421 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:53:26.193439 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:53:26.193460 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:53:26.193478 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:53:26.193496 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:53:26.193513 kernel: Console: colour dummy device 80x25
Dec 13 01:53:26.193531 kernel: printk: console [tty1] enabled
Dec 13 01:53:26.193549 kernel: ACPI: Core revision 20230628
Dec 13 01:53:26.193567 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:53:26.193584 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:53:26.193602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:53:26.193619 kernel: landlock: Up and running.
Dec 13 01:53:26.193641 kernel: SELinux: Initializing.
Dec 13 01:53:26.193658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:26.193676 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:26.193694 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:26.193712 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:26.193729 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:53:26.193748 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:53:26.193766 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:53:26.193787 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:53:26.193805 kernel: Remapping and enabling EFI services.
Dec 13 01:53:26.193822 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:53:26.193839 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:53:26.193857 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:53:26.193875 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:53:26.193892 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:53:26.193910 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:53:26.193927 kernel: SMP: Total of 2 processors activated.
Dec 13 01:53:26.193944 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:53:26.193966 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:53:26.193983 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:53:26.194012 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:53:26.194035 kernel: alternatives: applying system-wide alternatives
Dec 13 01:53:26.194053 kernel: devtmpfs: initialized
Dec 13 01:53:26.194071 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:53:26.194089 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:53:26.194107 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:53:26.194150 kernel: SMBIOS 3.0.0 present.
Dec 13 01:53:26.194175 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:53:26.194194 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:53:26.194212 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:53:26.194231 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:53:26.194250 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:53:26.194268 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:53:26.194286 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Dec 13 01:53:26.194309 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:53:26.194348 kernel: cpuidle: using governor menu
Dec 13 01:53:26.194367 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:53:26.194386 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:53:26.194404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:53:26.194423 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:53:26.194441 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:53:26.194460 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:53:26.194479 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:53:26.194503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:53:26.194523 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:53:26.194541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:53:26.194560 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:53:26.194578 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:53:26.194597 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:53:26.194615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:53:26.194634 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:53:26.194652 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:53:26.194675 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:53:26.194694 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:53:26.194713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:53:26.194731 kernel: ACPI: Interpreter enabled
Dec 13 01:53:26.194750 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:53:26.194768 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:53:26.194787 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:53:26.196348 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:53:26.196627 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:53:26.196827 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:53:26.197021 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:53:26.197287 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:53:26.197321 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:53:26.197342 kernel: acpiphp: Slot [1] registered
Dec 13 01:53:26.197361 kernel: acpiphp: Slot [2] registered
Dec 13 01:53:26.197380 kernel: acpiphp: Slot [3] registered
Dec 13 01:53:26.197409 kernel: acpiphp: Slot [4] registered
Dec 13 01:53:26.197428 kernel: acpiphp: Slot [5] registered
Dec 13 01:53:26.197448 kernel: acpiphp: Slot [6] registered
Dec 13 01:53:26.197467 kernel: acpiphp: Slot [7] registered
Dec 13 01:53:26.197485 kernel: acpiphp: Slot [8] registered
Dec 13 01:53:26.197503 kernel: acpiphp: Slot [9] registered
Dec 13 01:53:26.197522 kernel: acpiphp: Slot [10] registered
Dec 13 01:53:26.197541 kernel: acpiphp: Slot [11] registered
Dec 13 01:53:26.197559 kernel: acpiphp: Slot [12] registered
Dec 13 01:53:26.197578 kernel: acpiphp: Slot [13] registered
Dec 13 01:53:26.197601 kernel: acpiphp: Slot [14] registered
Dec 13 01:53:26.197619 kernel: acpiphp: Slot [15] registered
Dec 13 01:53:26.197637 kernel: acpiphp: Slot [16] registered
Dec 13 01:53:26.197655 kernel: acpiphp: Slot [17] registered
Dec 13 01:53:26.197674 kernel: acpiphp: Slot [18] registered
Dec 13 01:53:26.197693 kernel: acpiphp: Slot [19] registered
Dec 13 01:53:26.197712 kernel: acpiphp: Slot [20] registered
Dec 13 01:53:26.197730 kernel: acpiphp: Slot [21] registered
Dec 13 01:53:26.197749 kernel: acpiphp: Slot [22] registered
Dec 13 01:53:26.197772 kernel: acpiphp: Slot [23] registered
Dec 13 01:53:26.197791 kernel: acpiphp: Slot [24] registered
Dec 13 01:53:26.197809 kernel: acpiphp: Slot [25] registered
Dec 13 01:53:26.197827 kernel: acpiphp: Slot [26] registered
Dec 13 01:53:26.197846 kernel: acpiphp: Slot [27] registered
Dec 13 01:53:26.197865 kernel: acpiphp: Slot [28] registered
Dec 13 01:53:26.197885 kernel: acpiphp: Slot [29] registered
Dec 13 01:53:26.197905 kernel: acpiphp: Slot [30] registered
Dec 13 01:53:26.197924 kernel: acpiphp: Slot [31] registered
Dec 13 01:53:26.197943 kernel: PCI host bridge to bus 0000:00
Dec 13 01:53:26.201349 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:53:26.201701 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:53:26.201896 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:26.202078 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:53:26.202357 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:53:26.202594 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:53:26.202842 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:53:26.203093 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:53:26.204476 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:53:26.204689 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:26.204909 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:53:26.205136 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:53:26.206444 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:26.206667 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:53:26.206868 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:26.207065 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:26.208590 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:53:26.208810 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:53:26.209011 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:53:26.209255 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:53:26.209474 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:53:26.209716 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:53:26.209901 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:26.209928 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:53:26.209948 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:53:26.209967 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:53:26.209986 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:53:26.210004 kernel: iommu: Default domain type: Translated
Dec 13 01:53:26.210030 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:53:26.210048 kernel: efivars: Registered efivars operations
Dec 13 01:53:26.210067 kernel: vgaarb: loaded
Dec 13 01:53:26.210085 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:53:26.210104 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:53:26.211224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:53:26.211248 kernel: pnp: PnP ACPI init
Dec 13 01:53:26.211563 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:53:26.211599 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:53:26.211619 kernel: NET: Registered PF_INET protocol family
Dec 13 01:53:26.211638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:53:26.211657 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:53:26.211676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:53:26.211694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:53:26.211713 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:53:26.211731 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:53:26.211751 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:26.211775 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:26.211793 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:53:26.211812 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:53:26.211830 kernel: kvm [1]: HYP mode not available
Dec 13 01:53:26.211848 kernel: Initialise system trusted keyrings
Dec 13 01:53:26.211867 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:53:26.211886 kernel: Key type asymmetric registered
Dec 13 01:53:26.211904 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:53:26.211922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:53:26.211944 kernel: io scheduler mq-deadline registered
Dec 13 01:53:26.211962 kernel: io scheduler kyber registered
Dec 13 01:53:26.211981 kernel: io scheduler bfq registered
Dec 13 01:53:26.212255 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:53:26.212285 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:53:26.212304 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:53:26.212323 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:53:26.212342 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:53:26.212366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:53:26.212386 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:53:26.212589 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:53:26.212616 kernel: printk: console [ttyS0] disabled
Dec 13 01:53:26.212635 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:53:26.212654 kernel: printk: console [ttyS0] enabled
Dec 13 01:53:26.212672 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:53:26.212690 kernel: thunder_xcv, ver 1.0
Dec 13 01:53:26.212708 kernel: thunder_bgx, ver 1.0
Dec 13 01:53:26.212726 kernel: nicpf, ver 1.0
Dec 13 01:53:26.212750 kernel: nicvf, ver 1.0
Dec 13 01:53:26.212955 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:53:26.213575 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:53:25 UTC (1734054805)
Dec 13 01:53:26.213614 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:53:26.213634 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:53:26.213655 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:53:26.213675 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:53:26.213703 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:53:26.213723 kernel: Segment Routing with IPv6
Dec 13 01:53:26.213742 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:53:26.213761 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:53:26.213781 kernel: Key type dns_resolver registered
Dec 13 01:53:26.213800 kernel: registered taskstats version 1
Dec 13 01:53:26.213821 kernel: Loading compiled-in X.509 certificates
Dec 13 01:53:26.213840 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:53:26.213859 kernel: Key type .fscrypt registered
Dec 13 01:53:26.213878 kernel: Key type fscrypt-provisioning registered
Dec 13 01:53:26.213903 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:53:26.213923 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:53:26.213941 kernel: ima: No architecture policies found
Dec 13 01:53:26.213962 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:53:26.213981 kernel: clk: Disabling unused clocks
Dec 13 01:53:26.214000 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:53:26.214018 kernel: Run /init as init process
Dec 13 01:53:26.214038 kernel: with arguments:
Dec 13 01:53:26.214057 kernel: /init
Dec 13 01:53:26.214079 kernel: with environment:
Dec 13 01:53:26.214098 kernel: HOME=/
Dec 13 01:53:26.214202 kernel: TERM=linux
Dec 13 01:53:26.214227 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:53:26.214252 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:53:26.214276 systemd[1]: Detected virtualization amazon.
Dec 13 01:53:26.214297 systemd[1]: Detected architecture arm64.
Dec 13 01:53:26.214343 systemd[1]: Running in initrd.
Dec 13 01:53:26.214365 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:53:26.214385 systemd[1]: Hostname set to .
Dec 13 01:53:26.214407 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:53:26.214427 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:53:26.214447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:53:26.214468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:53:26.214489 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:53:26.214515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:53:26.214536 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:53:26.214557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:53:26.214581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:53:26.214603 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:53:26.214623 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:53:26.214643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:53:26.214667 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:53:26.214688 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:53:26.214708 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:53:26.214728 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:53:26.214749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:53:26.214769 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:53:26.214790 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:53:26.214811 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:53:26.214832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:26.214858 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:53:26.214879 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:53:26.214902 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:53:26.214923 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:53:26.214943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:53:26.214964 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:53:26.214985 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:53:26.215007 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:53:26.215033 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:53:26.215054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:26.215074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:53:26.215261 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:53:26.215325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:53:26.215348 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:53:26.215372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:53:26.215395 systemd-journald[251]: Journal started
Dec 13 01:53:26.215443 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2b5c257f6dc399b0c383880c08eb65) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:53:26.196399 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:53:26.234547 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:53:26.238156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:53:26.240472 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:53:26.242270 kernel: Bridge firewalling registered
Dec 13 01:53:26.244164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:53:26.249052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:26.264425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:26.274732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:53:26.290676 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:53:26.308094 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:53:26.323332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:53:26.341734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:53:26.352054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:26.357769 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:53:26.368494 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:53:26.380341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:53:26.383491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:53:26.430150 dracut-cmdline[286]: dracut-dracut-053
Dec 13 01:53:26.435143 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:26.484605 systemd-resolved[288]: Positive Trust Anchors:
Dec 13 01:53:26.488402 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:53:26.488474 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:53:26.570285 kernel: SCSI subsystem initialized
Dec 13 01:53:26.578265 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:53:26.591288 kernel: iscsi: registered transport (tcp)
Dec 13 01:53:26.613565 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:53:26.613636 kernel: QLogic iSCSI HBA Driver
Dec 13 01:53:26.699158 kernel: random: crng init done
Dec 13 01:53:26.699464 systemd-resolved[288]: Defaulting to hostname 'linux'.
Dec 13 01:53:26.705499 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:53:26.705788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:53:26.724742 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:53:26.736421 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:53:26.770505 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:53:26.770586 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:53:26.770614 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:53:26.840157 kernel: raid6: neonx8 gen() 6767 MB/s
Dec 13 01:53:26.857145 kernel: raid6: neonx4 gen() 6566 MB/s
Dec 13 01:53:26.874146 kernel: raid6: neonx2 gen() 5462 MB/s
Dec 13 01:53:26.891148 kernel: raid6: neonx1 gen() 3946 MB/s
Dec 13 01:53:26.908145 kernel: raid6: int64x8 gen() 3829 MB/s
Dec 13 01:53:26.925145 kernel: raid6: int64x4 gen() 3725 MB/s
Dec 13 01:53:26.942145 kernel: raid6: int64x2 gen() 3616 MB/s
Dec 13 01:53:26.959915 kernel: raid6: int64x1 gen() 2758 MB/s
Dec 13 01:53:26.959949 kernel: raid6: using algorithm neonx8 gen() 6767 MB/s
Dec 13 01:53:26.977931 kernel: raid6: .... xor() 4816 MB/s, rmw enabled
Dec 13 01:53:26.977999 kernel: raid6: using neon recovery algorithm
Dec 13 01:53:26.986672 kernel: xor: measuring software checksum speed
Dec 13 01:53:26.986749 kernel: 8regs : 10968 MB/sec
Dec 13 01:53:26.987799 kernel: 32regs : 11941 MB/sec
Dec 13 01:53:26.989920 kernel: arm64_neon : 9072 MB/sec
Dec 13 01:53:26.989967 kernel: xor: using function: 32regs (11941 MB/sec)
Dec 13 01:53:27.075160 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:53:27.094665 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:53:27.104419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:53:27.145282 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Dec 13 01:53:27.155700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:53:27.167997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:53:27.203646 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Dec 13 01:53:27.261576 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:53:27.271541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:53:27.393491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:53:27.424887 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:53:27.472157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:53:27.477647 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:53:27.483276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:53:27.500586 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:53:27.511428 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:53:27.552577 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:53:27.603715 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:53:27.603823 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:53:27.635362 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:53:27.635647 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:53:27.635885 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:47:2b:b3:4a:01
Dec 13 01:53:27.603713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:53:27.603975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:27.607002 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:27.609133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:53:27.609392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:27.664824 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:53:27.614179 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:27.627214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:27.671227 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:53:27.634375 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:53:27.681201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:27.687141 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:53:27.687495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:27.701638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:53:27.701717 kernel: GPT:9289727 != 16777215
Dec 13 01:53:27.701743 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:53:27.702764 kernel: GPT:9289727 != 16777215
Dec 13 01:53:27.703427 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:53:27.705154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:27.732690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:27.786171 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (519)
Dec 13 01:53:27.807173 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (535)
Dec 13 01:53:27.905022 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:53:27.934611 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:53:27.952070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:53:27.965743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:27.968634 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:27.989542 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:53:28.002625 disk-uuid[662]: Primary Header is updated.
Dec 13 01:53:28.002625 disk-uuid[662]: Secondary Entries is updated.
Dec 13 01:53:28.002625 disk-uuid[662]: Secondary Header is updated.
Dec 13 01:53:28.013175 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:28.021159 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:28.031160 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:29.040143 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:29.040808 disk-uuid[663]: The operation has completed successfully.
Dec 13 01:53:29.232087 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:53:29.232332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:53:29.290442 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:53:29.314098 sh[1006]: Success
Dec 13 01:53:29.338155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:53:29.455490 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:53:29.470340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:53:29.490203 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:53:29.521442 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:53:29.521505 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:29.523254 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:53:29.524482 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:53:29.525536 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:53:29.551155 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:53:29.555617 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:53:29.558949 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:53:29.568395 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:53:29.580419 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:53:29.621272 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:29.621384 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:29.622607 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:29.642162 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:29.662444 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:29.661615 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:53:29.673097 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:53:29.685448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:53:29.755967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:53:29.766396 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:53:29.833786 systemd-networkd[1209]: lo: Link UP
Dec 13 01:53:29.833807 systemd-networkd[1209]: lo: Gained carrier
Dec 13 01:53:29.836553 systemd-networkd[1209]: Enumeration completed
Dec 13 01:53:29.837347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:53:29.840458 systemd[1]: Reached target network.target - Network.
Dec 13 01:53:29.842392 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:29.842400 systemd-networkd[1209]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:53:29.862091 systemd-networkd[1209]: eth0: Link UP
Dec 13 01:53:29.862104 systemd-networkd[1209]: eth0: Gained carrier
Dec 13 01:53:29.862160 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:29.874881 systemd-networkd[1209]: eth0: DHCPv4 address 172.31.19.221/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:53:29.922223 ignition[1145]: Ignition 2.19.0
Dec 13 01:53:29.922250 ignition[1145]: Stage: fetch-offline
Dec 13 01:53:29.923395 ignition[1145]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:29.923420 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:29.924139 ignition[1145]: Ignition finished successfully
Dec 13 01:53:29.929870 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:53:29.940446 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:53:29.972923 ignition[1221]: Ignition 2.19.0
Dec 13 01:53:29.972955 ignition[1221]: Stage: fetch
Dec 13 01:53:29.974243 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:29.974270 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:29.974604 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:29.985791 ignition[1221]: PUT result: OK
Dec 13 01:53:29.989027 ignition[1221]: parsed url from cmdline: ""
Dec 13 01:53:29.989248 ignition[1221]: no config URL provided
Dec 13 01:53:29.989274 ignition[1221]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:53:29.989302 ignition[1221]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:53:29.989336 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:29.996905 ignition[1221]: PUT result: OK
Dec 13 01:53:29.996992 ignition[1221]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:53:29.998465 ignition[1221]: GET result: OK
Dec 13 01:53:30.008785 unknown[1221]: fetched base config from "system"
Dec 13 01:53:29.998611 ignition[1221]: parsing config with SHA512: 41df5b9a5f8a4f730790cc23328704f2e73c2910187df21593b130384bfb5bdea50f07759dc21ed356ef93cce2892c87e2091f569bb3e36240d8a31af6f3957e
Dec 13 01:53:30.008801 unknown[1221]: fetched base config from "system"
Dec 13 01:53:30.009552 ignition[1221]: fetch: fetch complete
Dec 13 01:53:30.008815 unknown[1221]: fetched user config from "aws"
Dec 13 01:53:30.009563 ignition[1221]: fetch: fetch passed
Dec 13 01:53:30.018216 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:53:30.009638 ignition[1221]: Ignition finished successfully
Dec 13 01:53:30.035837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:53:30.064462 ignition[1227]: Ignition 2.19.0
Dec 13 01:53:30.064950 ignition[1227]: Stage: kargs
Dec 13 01:53:30.065859 ignition[1227]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:30.065910 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:30.066743 ignition[1227]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:30.069362 ignition[1227]: PUT result: OK
Dec 13 01:53:30.079778 ignition[1227]: kargs: kargs passed
Dec 13 01:53:30.079877 ignition[1227]: Ignition finished successfully
Dec 13 01:53:30.083522 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:53:30.094523 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:53:30.132538 ignition[1233]: Ignition 2.19.0
Dec 13 01:53:30.132570 ignition[1233]: Stage: disks
Dec 13 01:53:30.133346 ignition[1233]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:30.133370 ignition[1233]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:30.133522 ignition[1233]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:30.135693 ignition[1233]: PUT result: OK
Dec 13 01:53:30.145454 ignition[1233]: disks: disks passed
Dec 13 01:53:30.145559 ignition[1233]: Ignition finished successfully
Dec 13 01:53:30.149268 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:53:30.154400 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:53:30.157285 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:53:30.164276 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:53:30.166387 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:53:30.171732 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:53:30.188486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:53:30.233362 systemd-fsck[1241]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:53:30.239850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:53:30.251291 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:53:30.350144 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:53:30.351434 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:53:30.354262 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:53:30.375465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:30.382364 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:53:30.384581 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:53:30.384663 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:53:30.384716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:53:30.408334 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1260)
Dec 13 01:53:30.414899 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:30.414982 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:30.417226 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:30.420173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:53:30.437738 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:53:30.444024 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:30.446980 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:30.527617 initrd-setup-root[1284]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:53:30.536441 initrd-setup-root[1291]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:53:30.546562 initrd-setup-root[1298]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:53:30.555690 initrd-setup-root[1305]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:53:30.733074 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:53:30.742358 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:53:30.755845 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:53:30.773213 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:53:30.775572 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:30.814656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:53:30.824361 ignition[1373]: INFO : Ignition 2.19.0
Dec 13 01:53:30.824361 ignition[1373]: INFO : Stage: mount
Dec 13 01:53:30.827553 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:30.827553 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:30.831637 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:30.834420 ignition[1373]: INFO : PUT result: OK
Dec 13 01:53:30.839029 ignition[1373]: INFO : mount: mount passed
Dec 13 01:53:30.840702 ignition[1373]: INFO : Ignition finished successfully
Dec 13 01:53:30.845186 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:53:30.857297 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:53:30.882664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:30.908150 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1384)
Dec 13 01:53:30.912067 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:30.912137 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:30.913241 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:30.919141 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:30.923025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:30.957082 ignition[1401]: INFO : Ignition 2.19.0
Dec 13 01:53:30.957082 ignition[1401]: INFO : Stage: files
Dec 13 01:53:30.960342 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:30.960342 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:30.960342 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:30.967058 ignition[1401]: INFO : PUT result: OK
Dec 13 01:53:30.971560 ignition[1401]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:53:30.975241 ignition[1401]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:53:30.977808 ignition[1401]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:53:30.985749 ignition[1401]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:53:30.989692 ignition[1401]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:53:30.992500 unknown[1401]: wrote ssh authorized keys file for user: core
Dec 13 01:53:30.994966 ignition[1401]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:53:30.999692 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:53:30.999692 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:53:31.088890 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:53:31.235000 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:53:31.235000 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:53:31.243447 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:53:31.645287 systemd-networkd[1209]: eth0: Gained IPv6LL Dec 13 01:53:31.736589 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:53:32.121600 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:53:32.121600 ignition[1401]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:53:32.128832 ignition[1401]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:32.133626 ignition[1401]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:32.133626 ignition[1401]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:53:32.133626 ignition[1401]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:32.133626 ignition[1401]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:32.148382 ignition[1401]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:32.148382 ignition[1401]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:32.148382 ignition[1401]: INFO : files: files passed Dec 13 01:53:32.148382 ignition[1401]: INFO : Ignition finished successfully Dec 13 01:53:32.139836 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:53:32.165466 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:53:32.188381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:53:32.191235 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:53:32.191478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:53:32.212448 initrd-setup-root-after-ignition[1429]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:32.212448 initrd-setup-root-after-ignition[1429]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:32.218579 initrd-setup-root-after-ignition[1433]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:32.225951 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:32.233483 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Dec 13 01:53:32.243423 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:53:32.309346 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:53:32.309766 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:53:32.318616 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:53:32.321281 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:53:32.323249 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:53:32.335439 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:53:32.371591 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:32.385625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:53:32.418932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:32.423329 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:32.424657 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:53:32.425225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:53:32.425455 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:32.426040 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:53:32.426669 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:53:32.426976 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:53:32.427297 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:53:32.427575 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:53:32.427891 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:53:32.428197 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:53:32.428776 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:53:32.429090 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:53:32.429665 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:53:32.429910 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:53:32.430136 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:53:32.430831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:32.431187 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:32.431377 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:53:32.453750 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:32.456363 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:53:32.456592 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:53:32.467643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:53:32.467899 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:32.472376 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:53:32.472615 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Dec 13 01:53:32.512534 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:53:32.531218 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:53:32.532959 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:53:32.533280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:32.535839 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:53:32.536070 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:53:32.557686 ignition[1453]: INFO : Ignition 2.19.0 Dec 13 01:53:32.564764 ignition[1453]: INFO : Stage: umount Dec 13 01:53:32.564764 ignition[1453]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:32.564764 ignition[1453]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:32.564764 ignition[1453]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:32.564764 ignition[1453]: INFO : PUT result: OK Dec 13 01:53:32.564764 ignition[1453]: INFO : umount: umount passed Dec 13 01:53:32.564764 ignition[1453]: INFO : Ignition finished successfully Dec 13 01:53:32.582156 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:53:32.584033 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:53:32.591619 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:53:32.593935 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:53:32.599498 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:53:32.601632 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:53:32.618509 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:53:32.618606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:53:32.627005 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:53:32.627129 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:53:32.633667 systemd[1]: Stopped target network.target - Network. Dec 13 01:53:32.635286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:53:32.635388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:53:32.637804 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:53:32.644029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:53:32.649698 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:32.651934 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:53:32.653559 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:53:32.655322 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:53:32.655403 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:53:32.657225 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:53:32.657291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:53:32.659148 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:53:32.659238 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:53:32.661073 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:53:32.661171 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 13 01:53:32.663585 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:53:32.667493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:53:32.673881 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:53:32.675415 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:53:32.677659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:53:32.685051 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:53:32.687423 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:53:32.692212 systemd-networkd[1209]: eth0: DHCPv6 lease lost Dec 13 01:53:32.694173 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:53:32.694356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:53:32.700339 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:53:32.700449 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:32.705837 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:53:32.707586 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:53:32.715440 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:53:32.715552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:32.739590 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:53:32.762608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:53:32.762719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:53:32.765036 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:53:32.765148 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:32.767322 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:53:32.767400 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:32.771855 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:53:32.797766 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:53:32.798782 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:32.812472 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:53:32.812628 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:32.818051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:53:32.818208 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:32.821815 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:53:32.825666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:53:32.827823 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:53:32.827907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:53:32.829996 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:53:32.830078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:32.848514 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:53:32.853588 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 13 01:53:32.853708 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:32.856134 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:53:32.856222 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:32.858799 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:53:32.858874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:32.861668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:53:32.861747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:32.864828 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:53:32.865156 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:53:32.898796 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:53:32.902054 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:53:32.904929 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:53:32.919528 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:53:32.940161 systemd[1]: Switching root. Dec 13 01:53:32.979326 systemd-journald[251]: Journal stopped Dec 13 01:53:34.969802 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 01:53:34.969927 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:53:34.969973 kernel: SELinux: policy capability open_perms=1 Dec 13 01:53:34.970007 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:53:34.970043 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:53:34.970079 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:53:34.970149 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:53:34.970183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:53:34.970220 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:53:34.970252 kernel: audit: type=1403 audit(1734054813.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:53:34.970324 systemd[1]: Successfully loaded SELinux policy in 47.805ms. Dec 13 01:53:34.970388 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.549ms. Dec 13 01:53:34.970428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:53:34.970463 systemd[1]: Detected virtualization amazon. Dec 13 01:53:34.970502 systemd[1]: Detected architecture arm64. Dec 13 01:53:34.970533 systemd[1]: Detected first boot. Dec 13 01:53:34.970567 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:53:34.970601 zram_generator::config[1495]: No configuration found. Dec 13 01:53:34.970639 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:53:34.970674 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:53:34.970706 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:53:34.970739 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
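The "SELinux: policy capability ..." lines above mirror what the kernel exposes under selinuxfs once the policy is loaded. A small read-only check, assuming the usual /sys/fs/selinux mount point:

    import os

    base = "/sys/fs/selinux/policy_capabilities"
    if os.path.isdir(base):
        for name in sorted(os.listdir(base)):
            # Each file holds 0 or 1, matching the booleans printed at
            # boot, e.g. network_peer_controls=1, always_check_network=0.
            with open(os.path.join(base, name)) as f:
                print(f"{name}={f.read().strip()}")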
Dec 13 01:53:34.970775 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:53:34.970808 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:53:34.970837 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:53:34.970869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:53:34.970903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:53:34.970935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:53:34.970965 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:53:34.971005 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:53:34.971038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:34.971072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:34.971104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:53:34.973252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:53:34.973295 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:53:34.973331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:53:34.973366 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:53:34.973399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:34.973429 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:53:34.973461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:53:34.973509 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:53:34.973540 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:53:34.973570 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:34.973603 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:53:34.973632 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:53:34.973665 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:53:34.973695 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:53:34.973725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:53:34.973762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:34.973792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:34.973825 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:34.973855 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:53:34.973885 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:53:34.973918 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:53:34.973953 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:53:34.973988 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
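Slice names such as system-serial\x2dgetty.slice above use systemd's unit-name escaping, in which a literal "-" inside a path component becomes \x2d. A minimal sketch of the rule for a single label (the full scheme, including "/" handling and leading dots, is what systemd-escape implements):

    def systemd_escape(label: str) -> str:
        out = []
        for ch in label:
            if ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                # Non-safe bytes become C-style \xNN escapes.
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    print(systemd_escape("serial-getty"))   # serial\x2dgetty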
Dec 13 01:53:34.974023 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:53:34.974061 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:53:34.974093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:53:34.974169 systemd[1]: Reached target machines.target - Containers. Dec 13 01:53:34.974202 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:53:34.974236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:34.974289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:53:34.974328 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:53:34.974363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:34.974403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:34.974433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:34.974464 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:53:34.974494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:34.974526 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:53:34.974557 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:53:34.974587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:53:34.974616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:53:34.974646 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:53:34.974680 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:53:34.974712 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:53:34.974742 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:53:34.974773 kernel: loop: module loaded Dec 13 01:53:34.974803 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:53:34.974833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:53:34.974864 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:53:34.974896 systemd[1]: Stopped verity-setup.service. Dec 13 01:53:34.974926 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:53:34.974961 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:53:34.974991 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:53:34.975023 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:53:34.975055 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:53:34.975085 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:53:34.979159 kernel: ACPI: bus type drm_connector registered Dec 13 01:53:34.979229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:34.979263 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 01:53:34.979293 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:53:34.979324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:34.979353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:34.979383 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:34.979415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:34.979450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:34.979479 kernel: fuse: init (API version 7.39) Dec 13 01:53:34.979510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:34.979542 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:34.979575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:34.979609 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:53:34.979646 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:53:34.979676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:34.979707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:53:34.979736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:53:34.979767 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:53:34.979798 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:53:34.979832 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:53:34.979863 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:53:34.979900 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:53:34.979933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:53:34.979963 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:53:34.980043 systemd-journald[1573]: Collecting audit messages is disabled. Dec 13 01:53:34.980104 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:53:34.980194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:34.980238 systemd-journald[1573]: Journal started Dec 13 01:53:34.980298 systemd-journald[1573]: Runtime Journal (/run/log/journal/ec2b5c257f6dc399b0c383880c08eb65) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:53:34.311508 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:53:34.337538 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:53:34.986249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:53:34.986337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:34.338399 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:53:35.007924 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:53:35.008009 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 13 01:53:35.041357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:53:35.041456 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:53:35.061678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:53:35.071176 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:53:35.079775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:53:35.084596 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:53:35.088067 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:53:35.101237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:53:35.104575 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:53:35.177194 kernel: loop0: detected capacity change from 0 to 52536 Dec 13 01:53:35.185811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:53:35.206425 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:53:35.232181 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:53:35.236217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:35.255817 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Dec 13 01:53:35.255850 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Dec 13 01:53:35.274126 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:53:35.272590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:35.287938 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:53:35.305294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:35.318669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:53:35.326550 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:53:35.328840 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:53:35.337403 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:53:35.337651 systemd-journald[1573]: Time spent on flushing to /var/log/journal/ec2b5c257f6dc399b0c383880c08eb65 is 57.198ms for 924 entries. Dec 13 01:53:35.337651 systemd-journald[1573]: System Journal (/var/log/journal/ec2b5c257f6dc399b0c383880c08eb65) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:53:35.416797 systemd-journald[1573]: Received client request to flush runtime journal. Dec 13 01:53:35.416863 kernel: loop2: detected capacity change from 0 to 189592 Dec 13 01:53:35.364686 udevadm[1641]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:53:35.396225 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:53:35.411449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:53:35.422884 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:53:35.495997 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. 
Dec 13 01:53:35.496038 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Dec 13 01:53:35.518666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:35.545168 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:53:35.596218 kernel: loop4: detected capacity change from 0 to 52536 Dec 13 01:53:35.626158 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:53:35.661300 kernel: loop6: detected capacity change from 0 to 189592 Dec 13 01:53:35.707367 kernel: loop7: detected capacity change from 0 to 114328 Dec 13 01:53:35.735043 (sd-merge)[1653]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:53:35.739587 (sd-merge)[1653]: Merged extensions into '/usr'. Dec 13 01:53:35.752458 systemd[1]: Reloading requested from client PID 1606 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:53:35.752822 systemd[1]: Reloading... Dec 13 01:53:36.002158 zram_generator::config[1682]: No configuration found. Dec 13 01:53:36.097258 ldconfig[1599]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:53:36.325082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:36.445571 systemd[1]: Reloading finished in 691 ms. Dec 13 01:53:36.484218 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:53:36.488205 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:53:36.491174 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:53:36.507601 systemd[1]: Starting ensure-sysext.service... Dec 13 01:53:36.518361 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:53:36.525443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:53:36.547221 systemd[1]: Reloading requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:53:36.547607 systemd[1]: Reloading... Dec 13 01:53:36.576523 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:53:36.577558 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:53:36.579631 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:53:36.580715 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Dec 13 01:53:36.580952 systemd-tmpfiles[1733]: ACLs are not supported, ignoring. Dec 13 01:53:36.590661 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:36.590680 systemd-tmpfiles[1733]: Skipping /boot Dec 13 01:53:36.619080 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:36.622487 systemd-tmpfiles[1733]: Skipping /boot Dec 13 01:53:36.644203 systemd-udevd[1734]: Using default interface naming scheme 'v255'. Dec 13 01:53:36.779162 zram_generator::config[1760]: No configuration found. Dec 13 01:53:36.832483 (udev-worker)[1780]: Network interface NamePolicy= disabled on kernel command line. 
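The (sd-merge) lines above show systemd-sysext activating four extension images and overlaying them onto /usr. Extension names are derived from the image file names found in the extension directories; a sketch of that naming rule (illustrative, not systemd's code; removesuffix needs Python 3.9+):

    import os

    for d in ("/etc/extensions", "/var/lib/extensions"):
        if os.path.isdir(d):
            for entry in sorted(os.listdir(d)):
                # kubernetes.raw -> extension "kubernetes", matching the
                # "Using extensions ..." line above.
                print(f"{d}/{entry} -> extension "
                      f"{entry.removesuffix('.raw')!r}")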
Dec 13 01:53:36.884318 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1770) Dec 13 01:53:36.890207 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1770) Dec 13 01:53:37.163171 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1776) Dec 13 01:53:37.238028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:37.382757 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:53:37.384055 systemd[1]: Reloading finished in 835 ms. Dec 13 01:53:37.408067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:37.420236 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:37.445853 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:53:37.499761 systemd[1]: Finished ensure-sysext.service. Dec 13 01:53:37.506145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:53:37.523388 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:37.530490 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:53:37.534207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:37.540467 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:53:37.555447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:37.561535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:37.569545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:37.576902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:37.583582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:37.599226 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:53:37.606791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:53:37.615098 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:53:37.623416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:53:37.625492 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:53:37.633362 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:53:37.639449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:37.663975 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:37.672937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:37.673352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:37.676981 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 01:53:37.677325 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:37.681733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:37.682100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:37.691777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:37.709589 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:53:37.731861 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:37.733398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:37.742185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:53:37.758704 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:53:37.802527 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:53:37.807485 augenrules[1965]: No rules Dec 13 01:53:37.809938 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:37.813089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:53:37.820740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:37.833452 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:53:37.836859 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:53:37.862564 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:53:37.886016 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:37.892668 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:53:37.898469 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:37.916249 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:53:37.921593 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:53:37.933468 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:53:38.002202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:38.059633 systemd-resolved[1949]: Positive Trust Anchors: Dec 13 01:53:38.059673 systemd-resolved[1949]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:38.059737 systemd-resolved[1949]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:53:38.062567 systemd-networkd[1947]: lo: Link UP Dec 13 01:53:38.062588 systemd-networkd[1947]: lo: Gained carrier Dec 13 01:53:38.065719 systemd-networkd[1947]: Enumeration completed Dec 13 01:53:38.065911 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:53:38.071046 systemd-networkd[1947]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:38.071071 systemd-networkd[1947]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:38.073854 systemd-networkd[1947]: eth0: Link UP Dec 13 01:53:38.074233 systemd-resolved[1949]: Defaulting to hostname 'linux'. Dec 13 01:53:38.074510 systemd-networkd[1947]: eth0: Gained carrier Dec 13 01:53:38.074660 systemd-networkd[1947]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:38.076491 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:53:38.083817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:53:38.086082 systemd[1]: Reached target network.target - Network. Dec 13 01:53:38.087749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:38.090658 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:53:38.092731 systemd-networkd[1947]: eth0: DHCPv4 address 172.31.19.221/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:53:38.094685 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:53:38.097162 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:53:38.099832 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:53:38.102491 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:53:38.104926 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:53:38.109237 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:53:38.109296 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:53:38.113174 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:53:38.120069 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:53:38.124947 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:53:38.135382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:53:38.138586 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
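The positive trust anchor that systemd-resolved logs above is the root zone's DS record for KSK-2017. A tiny parser for its presentation format, to make the fields legible (field meanings per RFC 4034; purely illustrative):

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc68345710"
              "4237c7f8ec8d")

    owner, _cls, _type, key_tag, algorithm, digest_type, digest = record.split()
    assert _cls == "IN" and _type == "DS"
    print(f"owner={owner!r} key_tag={key_tag} "
          f"algorithm={algorithm} (8 = RSA/SHA-256) "
          f"digest_type={digest_type} (2 = SHA-256) "
          f"digest={len(digest) * 4} bits")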
Dec 13 01:53:38.141406 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:53:38.143945 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:53:38.145778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:38.145825 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:38.152327 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:53:38.162741 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:53:38.170608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:53:38.178447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:53:38.183594 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:53:38.186343 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:53:38.191581 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:53:38.199512 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:53:38.205933 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:53:38.213396 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:53:38.224042 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:53:38.231279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:53:38.249545 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:53:38.259220 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:53:38.260081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:53:38.263585 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:53:38.268497 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:53:38.301790 jq[1996]: false Dec 13 01:53:38.313898 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:53:38.314466 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:53:38.328288 jq[2007]: true Dec 13 01:53:38.353165 jq[2017]: true Dec 13 01:53:38.363153 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:53:38.364903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:53:38.420294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 13 01:53:38.435598 extend-filesystems[1997]: Found loop4 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found loop5 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found loop6 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found loop7 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p1 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p2 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p3 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found usr Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p4 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p6 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p7 Dec 13 01:53:38.442333 extend-filesystems[1997]: Found nvme0n1p9 Dec 13 01:53:38.442333 extend-filesystems[1997]: Checking size of /dev/nvme0n1p9 Dec 13 01:53:38.463268 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:53:38.511453 tar[2009]: linux-arm64/helm Dec 13 01:53:38.462845 dbus-daemon[1995]: [system] SELinux support is enabled Dec 13 01:53:38.488637 dbus-daemon[1995]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1947 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:38.520448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:53:38.520496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:53:38.522984 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:53:38.523020 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:53:38.530299 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:53:38.540616 extend-filesystems[1997]: Resized partition /dev/nvme0n1p9 Dec 13 01:53:38.541936 (ntainerd)[2026]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:53:38.549363 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Dec 13 01:53:38.553330 extend-filesystems[2042]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:53:38.564320 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:53:38.572034 coreos-metadata[1994]: Dec 13 01:53:38.571 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:38.573861 coreos-metadata[1994]: Dec 13 01:53:38.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:53:38.578149 coreos-metadata[1994]: Dec 13 01:53:38.576 INFO Fetch successful Dec 13 01:53:38.578149 coreos-metadata[1994]: Dec 13 01:53:38.576 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:53:38.578149 coreos-metadata[1994]: Dec 13 01:53:38.577 INFO Fetch successful Dec 13 01:53:38.578149 coreos-metadata[1994]: Dec 13 01:53:38.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:53:38.585206 coreos-metadata[1994]: Dec 13 01:53:38.580 INFO Fetch successful Dec 13 01:53:38.585206 coreos-metadata[1994]: Dec 13 01:53:38.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:53:38.586002 coreos-metadata[1994]: Dec 13 01:53:38.585 INFO Fetch successful Dec 13 01:53:38.586002 coreos-metadata[1994]: Dec 13 01:53:38.585 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:53:38.587454 coreos-metadata[1994]: Dec 13 01:53:38.587 INFO Fetch failed with 404: resource not found Dec 13 01:53:38.587454 coreos-metadata[1994]: Dec 13 01:53:38.587 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:53:38.590281 update_engine[2005]: I20241213 01:53:38.589334 2005 main.cc:92] Flatcar Update Engine starting Dec 13 01:53:38.594453 coreos-metadata[1994]: Dec 13 01:53:38.590 INFO Fetch successful Dec 13 01:53:38.594453 coreos-metadata[1994]: Dec 13 01:53:38.594 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:53:38.596932 coreos-metadata[1994]: Dec 13 01:53:38.595 INFO Fetch successful Dec 13 01:53:38.596932 coreos-metadata[1994]: Dec 13 01:53:38.596 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: ---------------------------------------------------- Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: corporation. 
Support and training for ntp-4 are Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: available at https://www.nwtime.org/support Dec 13 01:53:38.597250 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: ---------------------------------------------------- Dec 13 01:53:38.596742 ntpd[1999]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:53:38.604729 coreos-metadata[1994]: Dec 13 01:53:38.598 INFO Fetch successful Dec 13 01:53:38.604729 coreos-metadata[1994]: Dec 13 01:53:38.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:53:38.604729 coreos-metadata[1994]: Dec 13 01:53:38.602 INFO Fetch successful Dec 13 01:53:38.604729 coreos-metadata[1994]: Dec 13 01:53:38.602 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:53:38.596797 ntpd[1999]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:53:38.596818 ntpd[1999]: ---------------------------------------------------- Dec 13 01:53:38.615479 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: proto: precision = 0.108 usec (-23) Dec 13 01:53:38.615479 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: basedate set to 2024-11-30 Dec 13 01:53:38.615479 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: gps base set to 2024-12-01 (week 2343) Dec 13 01:53:38.605166 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:53:38.596837 ntpd[1999]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:53:38.624834 update_engine[2005]: I20241213 01:53:38.622653 2005 update_check_scheduler.cc:74] Next update check in 7m27s Dec 13 01:53:38.624895 coreos-metadata[1994]: Dec 13 01:53:38.618 INFO Fetch successful Dec 13 01:53:38.596857 ntpd[1999]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:53:38.596876 ntpd[1999]: corporation. Support and training for ntp-4 are Dec 13 01:53:38.596894 ntpd[1999]: available at https://www.nwtime.org/support Dec 13 01:53:38.596912 ntpd[1999]: ---------------------------------------------------- Dec 13 01:53:38.614370 ntpd[1999]: proto: precision = 0.108 usec (-23) Dec 13 01:53:38.614815 ntpd[1999]: basedate set to 2024-11-30 Dec 13 01:53:38.614841 ntpd[1999]: gps base set to 2024-12-01 (week 2343) Dec 13 01:53:38.629656 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:53:38.632540 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:53:38.632885 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
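The "proto: precision = 0.108 usec (-23)" line above pairs ntpd's measured clock-reading precision with the base-2 exponent it advertises. Reading the exponent as the rounded log2 of the measurement checks out numerically (my inference from the two numbers; the log itself does not state the relationship):

    import math

    precision_s = 0.108e-6                  # measured value from the log
    print(round(math.log2(precision_s)))    # -23, the advertised exponent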
Dec 13 01:53:38.647547 ntpd[1999]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:53:38.650518 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:53:38.647660 ntpd[1999]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:53:38.653678 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listen normally on 3 eth0 172.31.19.221:123 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listen normally on 4 lo [::1]:123 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: bind(21) AF_INET6 fe80::447:2bff:feb3:4a01%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: unable to create socket on eth0 (5) for fe80::447:2bff:feb3:4a01%2#123 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: failed to init interface for address fe80::447:2bff:feb3:4a01%2 Dec 13 01:53:38.665829 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: Listening on routing socket on fd #21 for interface updates Dec 13 01:53:38.656788 ntpd[1999]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:53:38.656907 ntpd[1999]: Listen normally on 3 eth0 172.31.19.221:123 Dec 13 01:53:38.657663 ntpd[1999]: Listen normally on 4 lo [::1]:123 Dec 13 01:53:38.657816 ntpd[1999]: bind(21) AF_INET6 fe80::447:2bff:feb3:4a01%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:53:38.660689 ntpd[1999]: unable to create socket on eth0 (5) for fe80::447:2bff:feb3:4a01%2#123 Dec 13 01:53:38.660730 ntpd[1999]: failed to init interface for address fe80::447:2bff:feb3:4a01%2 Dec 13 01:53:38.660808 ntpd[1999]: Listening on routing socket on fd #21 for interface updates Dec 13 01:53:38.695144 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:38.695241 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:38.717067 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:38.717067 ntpd[1999]: 13 Dec 01:53:38 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:38.739201 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:53:38.757485 extend-filesystems[2042]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:53:38.757485 extend-filesystems[2042]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:53:38.757485 extend-filesystems[2042]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:53:38.776339 extend-filesystems[1997]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:53:38.779744 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:38.766285 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:53:38.767296 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:53:38.780761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:53:38.793476 systemd[1]: Starting sshkeys.service... Dec 13 01:53:38.796329 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:53:38.800463 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:53:38.806446 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
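The resize2fs lines above record an online grow of the root filesystem from 553472 to 1489915 blocks of 4 KiB each. The same numbers in bytes, as a quick check:

    BLOCK = 4096                      # block size from the "(4k) blocks" note

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {to_gib(553472):.2f} GiB")    # ~2.11 GiB
    print(f"after:  {to_gib(1489915):.2f} GiB")   # ~5.68 GiB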
Dec 13 01:53:38.853041 systemd-logind[2004]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:53:38.861275 systemd-logind[2004]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:53:38.866354 systemd-logind[2004]: New seat seat0. Dec 13 01:53:38.874034 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:53:38.932994 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:53:38.944899 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:53:38.973154 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1780) Dec 13 01:53:39.178031 locksmithd[2051]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:53:39.217768 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:53:39.218023 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:53:39.224097 dbus-daemon[1995]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2043 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:39.264355 containerd[2026]: time="2024-12-13T01:53:39.264181280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:53:39.281252 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:53:39.399516 polkitd[2157]: Started polkitd version 121 Dec 13 01:53:39.434604 coreos-metadata[2087]: Dec 13 01:53:39.434 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:39.440712 coreos-metadata[2087]: Dec 13 01:53:39.435 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:53:39.440712 coreos-metadata[2087]: Dec 13 01:53:39.436 INFO Fetch successful Dec 13 01:53:39.440712 coreos-metadata[2087]: Dec 13 01:53:39.436 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:53:39.440712 coreos-metadata[2087]: Dec 13 01:53:39.438 INFO Fetch successful Dec 13 01:53:39.442604 unknown[2087]: wrote ssh authorized keys file for user: core Dec 13 01:53:39.453875 systemd-networkd[1947]: eth0: Gained IPv6LL Dec 13 01:53:39.473089 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:53:39.476681 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:53:39.505340 polkitd[2157]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:53:39.505466 polkitd[2157]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:53:39.507802 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:53:39.510835 polkitd[2157]: Finished loading, compiling and executing 2 rules Dec 13 01:53:39.528578 dbus-daemon[1995]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:53:39.530951 polkitd[2157]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:53:39.536018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:39.541942 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:53:39.545001 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 13 01:53:39.573362 update-ssh-keys[2185]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:39.574948 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:53:39.586176 systemd[1]: Finished sshkeys.service. Dec 13 01:53:39.602193 containerd[2026]: time="2024-12-13T01:53:39.583561257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.632237 containerd[2026]: time="2024-12-13T01:53:39.631780414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:39.632237 containerd[2026]: time="2024-12-13T01:53:39.631848322Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:53:39.632237 containerd[2026]: time="2024-12-13T01:53:39.631884322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:53:39.634141 containerd[2026]: time="2024-12-13T01:53:39.632508850Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:53:39.634141 containerd[2026]: time="2024-12-13T01:53:39.632573146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.634141 containerd[2026]: time="2024-12-13T01:53:39.632708290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:39.634141 containerd[2026]: time="2024-12-13T01:53:39.632739730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.635137 containerd[2026]: time="2024-12-13T01:53:39.633092422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:39.635349 containerd[2026]: time="2024-12-13T01:53:39.635297650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.635728738Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.640238290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.640496038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.640900558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.641145550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.641182966Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.641354818Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:53:39.641575 containerd[2026]: time="2024-12-13T01:53:39.641449858Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658252510Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658359958Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658399330Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658484158Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658521046Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.658783846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659227942Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659405878Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659440042Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659472298Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659504482Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659534050Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659563438Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.660903 containerd[2026]: time="2024-12-13T01:53:39.659594554Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.661126 systemd-hostnamed[2043]: Hostname set to (transient) Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659636746Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659672578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659702050Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659728534Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659769838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659800462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659829214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659859946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659888806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659919310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659947402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.659976490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.660006802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662184 containerd[2026]: time="2024-12-13T01:53:39.660041590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.662814 containerd[2026]: time="2024-12-13T01:53:39.660070306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.665399 systemd-resolved[1949]: System hostname changed to 'ip-172-31-19-221'. Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.660104590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668220394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668268058Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668330926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668361814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668394358Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668689738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668855542Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668888098Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:53:39.669489 containerd[2026]: time="2024-12-13T01:53:39.668918734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:53:39.673657 containerd[2026]: time="2024-12-13T01:53:39.668947810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.673657 containerd[2026]: time="2024-12-13T01:53:39.670559950Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:53:39.673657 containerd[2026]: time="2024-12-13T01:53:39.670598830Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:53:39.673657 containerd[2026]: time="2024-12-13T01:53:39.670628458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:53:39.673919 containerd[2026]: time="2024-12-13T01:53:39.672347938Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:53:39.673919 containerd[2026]: time="2024-12-13T01:53:39.672494878Z" level=info msg="Connect containerd service" Dec 13 01:53:39.673919 containerd[2026]: time="2024-12-13T01:53:39.672560902Z" level=info msg="using legacy CRI server" Dec 13 01:53:39.673919 containerd[2026]: time="2024-12-13T01:53:39.672578866Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:53:39.673919 containerd[2026]: time="2024-12-13T01:53:39.672727942Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:53:39.676893 containerd[2026]: time="2024-12-13T01:53:39.676697038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677184142Z" level=info msg="Start subscribing containerd event" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677266234Z" level=info msg="Start recovering state" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677390386Z" level=info msg="Start event monitor" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677415250Z" level=info msg="Start snapshots syncer" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677448298Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:53:39.678425 containerd[2026]: time="2024-12-13T01:53:39.677468002Z" level=info msg="Start streaming server" Dec 13 01:53:39.681601 containerd[2026]: time="2024-12-13T01:53:39.681548290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:53:39.690389 containerd[2026]: time="2024-12-13T01:53:39.684304822Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:53:39.690389 containerd[2026]: time="2024-12-13T01:53:39.684472786Z" level=info msg="containerd successfully booted in 0.421785s" Dec 13 01:53:39.684684 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:53:39.736731 amazon-ssm-agent[2188]: Initializing new seelog logger Dec 13 01:53:39.737386 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Dec 13 01:53:39.737386 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.737386 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
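containerd's plugin sweep above is mostly negative space: every snapshotter whose prerequisite is missing gets skipped (no aufs module, /var/lib/containerd is ext4 rather than btrfs or zfs, devmapper unconfigured), leaving overlayfs, which the CRI config dump then names as Snapshotter:overlayfs with runc and SystemdCgroup:true. To list those skip decisions from a saved excerpt, a small sketch (boot.log is a hypothetical file holding the text above; note the journal escapes the inner quotes as \"):

```python
import re

log = open("boot.log").read()  # hypothetical capture of the excerpt above
# Plugin names sit inside journal-escaped quotes: \"io.containerd...\"
skipped = re.findall(r'skip loading plugin \\"([\w.]+)\\"', log)
print(skipped)  # e.g. ['io.containerd.snapshotter.v1.aufs', ...]
```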
Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 processing appconfig overrides Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 processing appconfig overrides Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.741208 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 processing appconfig overrides Dec 13 01:53:39.743928 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO Proxy environment variables: Dec 13 01:53:39.750941 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.751169 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:39.751596 amazon-ssm-agent[2188]: 2024/12/13 01:53:39 processing appconfig overrides Dec 13 01:53:39.765257 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:53:39.852367 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO https_proxy: Dec 13 01:53:39.953192 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO http_proxy: Dec 13 01:53:40.049279 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO no_proxy: Dec 13 01:53:40.149340 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:53:40.247783 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:53:40.347247 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO Agent will take identity from EC2 Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [Registrar] Starting registrar module Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:40 INFO [EC2Identity] EC2 registration was successful. 
Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:40 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:40 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:53:40.446885 amazon-ssm-agent[2188]: 2024-12-13 01:53:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:53:40.449510 amazon-ssm-agent[2188]: 2024-12-13 01:53:40 INFO [CredentialRefresher] Next credential rotation will be in 32.0249448528 minutes Dec 13 01:53:40.493214 tar[2009]: linux-arm64/LICENSE Dec 13 01:53:40.496891 tar[2009]: linux-arm64/README.md Dec 13 01:53:40.517234 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:53:40.616740 sshd_keygen[2021]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:53:40.659826 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:53:40.671670 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:53:40.679824 systemd[1]: Started sshd@0-172.31.19.221:22-139.178.68.195:39214.service - OpenSSH per-connection server daemon (139.178.68.195:39214). Dec 13 01:53:40.695871 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:53:40.697900 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:53:40.715380 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:53:40.743669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:53:40.756306 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:53:40.771684 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:53:40.774205 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:53:40.891687 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 39214 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:40.897419 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:40.916496 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:53:40.930642 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:53:40.939463 systemd-logind[2004]: New session 1 of user core. Dec 13 01:53:40.961036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:53:40.973776 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:53:40.992268 (systemd)[2243]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:41.216392 systemd[2243]: Queued start job for default target default.target. Dec 13 01:53:41.228533 systemd[2243]: Created slice app.slice - User Application Slice. Dec 13 01:53:41.228607 systemd[2243]: Reached target paths.target - Paths. Dec 13 01:53:41.228641 systemd[2243]: Reached target timers.target - Timers. Dec 13 01:53:41.231503 systemd[2243]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:53:41.269689 systemd[2243]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:53:41.269822 systemd[2243]: Reached target sockets.target - Sockets. Dec 13 01:53:41.269855 systemd[2243]: Reached target basic.target - Basic System. Dec 13 01:53:41.269941 systemd[2243]: Reached target default.target - Main User Target. Dec 13 01:53:41.270004 systemd[2243]: Startup finished in 264ms. 
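The SHA256:3zfVqstn... string sshd logs for each accepted key is an OpenSSH-style fingerprint: the base64 of the SHA-256 digest of the raw public-key blob, with trailing padding stripped. A sketch that reproduces it from any OpenSSH public-key line (id_rsa.pub is a hypothetical path):

```python
import base64, hashlib

# An OpenSSH public-key line is "<type> <base64-blob> [comment]".
b64_blob = open("id_rsa.pub").read().split()[1]  # hypothetical key file
digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```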
Dec 13 01:53:41.270727 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:53:41.283409 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:53:41.443324 systemd[1]: Started sshd@1-172.31.19.221:22-139.178.68.195:39216.service - OpenSSH per-connection server daemon (139.178.68.195:39216). Dec 13 01:53:41.495924 amazon-ssm-agent[2188]: 2024-12-13 01:53:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:53:41.596315 amazon-ssm-agent[2188]: 2024-12-13 01:53:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2257) started Dec 13 01:53:41.603625 ntpd[1999]: Listen normally on 6 eth0 [fe80::447:2bff:feb3:4a01%2]:123 Dec 13 01:53:41.642438 sshd[2254]: Accepted publickey for core from 139.178.68.195 port 39216 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:41.646003 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:41.660829 systemd-logind[2004]: New session 2 of user core. Dec 13 01:53:41.670443 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:53:41.697190 amazon-ssm-agent[2188]: 2024-12-13 01:53:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:53:41.812399 sshd[2254]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:41.819591 systemd[1]: sshd@1-172.31.19.221:22-139.178.68.195:39216.service: Deactivated successfully. Dec 13 01:53:41.826237 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:53:41.829732 systemd-logind[2004]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:53:41.847497 systemd-logind[2004]: Removed session 2. Dec 13 01:53:41.858805 systemd[1]: Started sshd@2-172.31.19.221:22-139.178.68.195:39224.service - OpenSSH per-connection server daemon (139.178.68.195:39224). Dec 13 01:53:41.935408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:41.938659 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:53:41.941761 systemd[1]: Startup finished in 1.180s (kernel) + 7.526s (initrd) + 8.660s (userspace) = 17.367s. Dec 13 01:53:41.947052 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:42.043394 sshd[2271]: Accepted publickey for core from 139.178.68.195 port 39224 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:42.046069 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:42.056470 systemd-logind[2004]: New session 3 of user core. Dec 13 01:53:42.064612 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:53:42.193098 sshd[2271]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:42.201556 systemd[1]: sshd@2-172.31.19.221:22-139.178.68.195:39224.service: Deactivated successfully. Dec 13 01:53:42.205229 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:53:42.207516 systemd-logind[2004]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:53:42.209792 systemd-logind[2004]: Removed session 3.
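The boot summary above rounds each component to the millisecond but the printed total is evidently computed from the unrounded values, so the parts need not sum exactly to the total:

```python
parts = (1.180, 7.526, 8.660)   # kernel, initrd, userspace, as printed
print(f"{sum(parts):.3f}s")     # 17.366s vs systemd's 17.367s total
```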
Dec 13 01:53:42.929183 kubelet[2278]: E1213 01:53:42.928883 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:42.933650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:42.934008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:42.934746 systemd[1]: kubelet.service: Consumed 1.278s CPU time. Dec 13 01:53:45.738176 systemd-resolved[1949]: Clock change detected. Flushing caches. Dec 13 01:53:52.362777 systemd[1]: Started sshd@3-172.31.19.221:22-139.178.68.195:57122.service - OpenSSH per-connection server daemon (139.178.68.195:57122). Dec 13 01:53:52.544236 sshd[2296]: Accepted publickey for core from 139.178.68.195 port 57122 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:52.547090 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:52.556804 systemd-logind[2004]: New session 4 of user core. Dec 13 01:53:52.560840 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:53:52.685480 sshd[2296]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:52.691431 systemd-logind[2004]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:52.692208 systemd[1]: sshd@3-172.31.19.221:22-139.178.68.195:57122.service: Deactivated successfully. Dec 13 01:53:52.695656 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:52.698910 systemd-logind[2004]: Removed session 4. Dec 13 01:53:52.731110 systemd[1]: Started sshd@4-172.31.19.221:22-139.178.68.195:57132.service - OpenSSH per-connection server daemon (139.178.68.195:57132). Dec 13 01:53:52.898874 sshd[2303]: Accepted publickey for core from 139.178.68.195 port 57132 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:52.901512 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:52.910635 systemd-logind[2004]: New session 5 of user core. Dec 13 01:53:52.919774 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:53:53.038836 sshd[2303]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:53.045574 systemd[1]: sshd@4-172.31.19.221:22-139.178.68.195:57132.service: Deactivated successfully. Dec 13 01:53:53.049137 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:53.050571 systemd-logind[2004]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:53.052415 systemd-logind[2004]: Removed session 5. Dec 13 01:53:53.069027 systemd[1]: Started sshd@5-172.31.19.221:22-139.178.68.195:57138.service - OpenSSH per-connection server daemon (139.178.68.195:57138). Dec 13 01:53:53.070348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:53.076147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:53.256609 sshd[2310]: Accepted publickey for core from 139.178.68.195 port 57138 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:53.258094 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:53.271633 systemd-logind[2004]: New session 6 of user core. 
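Two things happen in this stretch. First, the jump from 01:53:42.93 to 01:53:45.74, flagged by systemd-resolved's "Clock change detected", is most plausibly ntpd stepping the clock it had earlier reported as unsynchronized; every later timestamp is shifted by roughly that amount. Second, the kubelet failure is the normal pre-bootstrap state on a kubeadm-style node: systemd keeps rescheduling restarts, and each attempt exits immediately because /var/lib/kubelet/config.yaml has not been written yet (kubeadm init or join creates it). A trivial pre-flight sketch of that condition:

```python
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")  # path from the error above
if not cfg.exists():
    print(f"{cfg} missing; kubelet will crash-loop until kubeadm writes it")
```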
Dec 13 01:53:53.279844 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:53:53.403673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:53.413954 sshd[2310]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:53.417414 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:53.420178 systemd[1]: sshd@5-172.31.19.221:22-139.178.68.195:57138.service: Deactivated successfully. Dec 13 01:53:53.426463 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:53.429753 systemd-logind[2004]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:53.433090 systemd-logind[2004]: Removed session 6. Dec 13 01:53:53.457129 systemd[1]: Started sshd@6-172.31.19.221:22-139.178.68.195:57154.service - OpenSSH per-connection server daemon (139.178.68.195:57154). Dec 13 01:53:53.515055 kubelet[2322]: E1213 01:53:53.514962 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:53.522404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:53.522792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:53.641427 sshd[2330]: Accepted publickey for core from 139.178.68.195 port 57154 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:53.644031 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:53.651585 systemd-logind[2004]: New session 7 of user core. Dec 13 01:53:53.662770 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:53:53.780206 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:53:53.780927 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:53.796365 sudo[2336]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:53.819766 sshd[2330]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:53.825264 systemd[1]: sshd@6-172.31.19.221:22-139.178.68.195:57154.service: Deactivated successfully. Dec 13 01:53:53.828349 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:53:53.831702 systemd-logind[2004]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:53:53.833988 systemd-logind[2004]: Removed session 7. Dec 13 01:53:53.862078 systemd[1]: Started sshd@7-172.31.19.221:22-139.178.68.195:57166.service - OpenSSH per-connection server daemon (139.178.68.195:57166). Dec 13 01:53:54.030241 sshd[2341]: Accepted publickey for core from 139.178.68.195 port 57166 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:54.033460 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:54.042691 systemd-logind[2004]: New session 8 of user core. Dec 13 01:53:54.047850 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:53:54.152026 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:53:54.153189 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:54.159794 sudo[2345]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:54.169764 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:53:54.170375 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:54.193140 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:54.213676 auditctl[2348]: No rules Dec 13 01:53:54.214598 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:53:54.216622 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:54.226496 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:54.275480 augenrules[2366]: No rules Dec 13 01:53:54.278671 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:54.281088 sudo[2344]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:54.306853 sshd[2341]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:54.312058 systemd[1]: sshd@7-172.31.19.221:22-139.178.68.195:57166.service: Deactivated successfully. Dec 13 01:53:54.315053 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:53:54.318624 systemd-logind[2004]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:53:54.320461 systemd-logind[2004]: Removed session 8. Dec 13 01:53:54.346223 systemd[1]: Started sshd@8-172.31.19.221:22-139.178.68.195:57182.service - OpenSSH per-connection server daemon (139.178.68.195:57182). Dec 13 01:53:54.527472 sshd[2374]: Accepted publickey for core from 139.178.68.195 port 57182 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:54.530199 sshd[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:54.539899 systemd-logind[2004]: New session 9 of user core. Dec 13 01:53:54.543842 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:53:54.650196 sudo[2377]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:54.650841 sudo[2377]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:55.094339 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:53:55.094935 (dockerd)[2393]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:53:55.464810 dockerd[2393]: time="2024-12-13T01:53:55.463723724Z" level=info msg="Starting up" Dec 13 01:53:55.599002 dockerd[2393]: time="2024-12-13T01:53:55.598911716Z" level=info msg="Loading containers: start." Dec 13 01:53:55.762560 kernel: Initializing XFRM netlink socket Dec 13 01:53:55.797931 (udev-worker)[2416]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:55.895316 systemd-networkd[1947]: docker0: Link UP Dec 13 01:53:55.921959 dockerd[2393]: time="2024-12-13T01:53:55.921889222Z" level=info msg="Loading containers: done." 
Dec 13 01:53:55.945448 dockerd[2393]: time="2024-12-13T01:53:55.945368506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:53:55.945813 dockerd[2393]: time="2024-12-13T01:53:55.945716182Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:53:55.945995 dockerd[2393]: time="2024-12-13T01:53:55.945954586Z" level=info msg="Daemon has completed initialization" Dec 13 01:53:56.007854 dockerd[2393]: time="2024-12-13T01:53:56.006752359Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:53:56.007771 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:53:57.115025 containerd[2026]: time="2024-12-13T01:53:57.114947720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:53:57.805911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883827810.mount: Deactivated successfully. Dec 13 01:53:59.640577 containerd[2026]: time="2024-12-13T01:53:59.639201313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.642020 containerd[2026]: time="2024-12-13T01:53:59.641972689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Dec 13 01:53:59.642926 containerd[2026]: time="2024-12-13T01:53:59.642881449Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.648941 containerd[2026]: time="2024-12-13T01:53:59.648882421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.651555 containerd[2026]: time="2024-12-13T01:53:59.651456721Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.536434373s" Dec 13 01:53:59.651832 containerd[2026]: time="2024-12-13T01:53:59.651794617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 01:53:59.652771 containerd[2026]: time="2024-12-13T01:53:59.652716397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:54:01.568880 containerd[2026]: time="2024-12-13T01:54:01.568687370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.571189 containerd[2026]: time="2024-12-13T01:54:01.571111718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Dec 13 01:54:01.572780 containerd[2026]: time="2024-12-13T01:54:01.572705522Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" 
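containerd logs both the byte count at stop-pull time and the wall-clock duration of each pull, so effective registry throughput falls out directly. For the kube-apiserver image above:

```python
bytes_read = 25_615_585   # "bytes read" for kube-apiserver:v1.31.4
elapsed_s = 2.536434373   # from the "Pulled image ... in 2.536434373s" line
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ~10.1 MB/s
```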
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.578480 containerd[2026]: time="2024-12-13T01:54:01.578371946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.581104 containerd[2026]: time="2024-12-13T01:54:01.580823990Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.927870533s" Dec 13 01:54:01.581104 containerd[2026]: time="2024-12-13T01:54:01.580886078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 01:54:01.581892 containerd[2026]: time="2024-12-13T01:54:01.581832698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:54:03.210977 containerd[2026]: time="2024-12-13T01:54:03.210739142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.212269 containerd[2026]: time="2024-12-13T01:54:03.212181026Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Dec 13 01:54:03.213285 containerd[2026]: time="2024-12-13T01:54:03.213225650Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.220872 containerd[2026]: time="2024-12-13T01:54:03.220787210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.223138 containerd[2026]: time="2024-12-13T01:54:03.222961118Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.640934464s" Dec 13 01:54:03.223138 containerd[2026]: time="2024-12-13T01:54:03.223015526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 01:54:03.224060 containerd[2026]: time="2024-12-13T01:54:03.223984010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:54:03.773714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:54:03.781219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:04.163906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:54:04.176092 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:04.282758 kubelet[2608]: E1213 01:54:04.282439 2608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:04.288998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:04.289348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:54:04.697289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258112295.mount: Deactivated successfully. Dec 13 01:54:05.233832 containerd[2026]: time="2024-12-13T01:54:05.233759068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.235849 containerd[2026]: time="2024-12-13T01:54:05.235773676Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Dec 13 01:54:05.236830 containerd[2026]: time="2024-12-13T01:54:05.236786416Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.240777 containerd[2026]: time="2024-12-13T01:54:05.240721684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.242418 containerd[2026]: time="2024-12-13T01:54:05.242349220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.018300938s" Dec 13 01:54:05.242578 containerd[2026]: time="2024-12-13T01:54:05.242412796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:54:05.243320 containerd[2026]: time="2024-12-13T01:54:05.243282148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:54:05.802699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554336640.mount: Deactivated successfully. 
Dec 13 01:54:06.954925 containerd[2026]: time="2024-12-13T01:54:06.954839397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.957195 containerd[2026]: time="2024-12-13T01:54:06.957133029Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:54:06.959294 containerd[2026]: time="2024-12-13T01:54:06.959205669Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.967686 containerd[2026]: time="2024-12-13T01:54:06.967607397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.970295 containerd[2026]: time="2024-12-13T01:54:06.970086861Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.726221045s" Dec 13 01:54:06.970295 containerd[2026]: time="2024-12-13T01:54:06.970146765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:54:06.971401 containerd[2026]: time="2024-12-13T01:54:06.970806597Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:54:07.598484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045788364.mount: Deactivated successfully. 
Dec 13 01:54:07.613656 containerd[2026]: time="2024-12-13T01:54:07.612730196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.615080 containerd[2026]: time="2024-12-13T01:54:07.614723012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 13 01:54:07.617387 containerd[2026]: time="2024-12-13T01:54:07.617293004Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.622546 containerd[2026]: time="2024-12-13T01:54:07.622426592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.624264 containerd[2026]: time="2024-12-13T01:54:07.624057980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 653.203143ms" Dec 13 01:54:07.624264 containerd[2026]: time="2024-12-13T01:54:07.624118148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 01:54:07.625190 containerd[2026]: time="2024-12-13T01:54:07.624708524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:54:08.229894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259071256.mount: Deactivated successfully. Dec 13 01:54:09.809499 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:54:12.097607 containerd[2026]: time="2024-12-13T01:54:12.096994582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:12.100127 containerd[2026]: time="2024-12-13T01:54:12.099670918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Dec 13 01:54:12.102341 containerd[2026]: time="2024-12-13T01:54:12.102268642Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:12.111356 containerd[2026]: time="2024-12-13T01:54:12.111255478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:12.114338 containerd[2026]: time="2024-12-13T01:54:12.114007811Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.489250051s" Dec 13 01:54:12.114338 containerd[2026]: time="2024-12-13T01:54:12.114097295Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 01:54:14.540616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:54:14.552688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:14.882017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:14.884815 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:14.956483 kubelet[2753]: E1213 01:54:14.956398 2753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:14.961692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:14.962009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:54:18.793439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:18.802082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:18.876885 systemd[1]: Reloading requested from client PID 2767 ('systemctl') (unit session-9.scope)... Dec 13 01:54:18.877186 systemd[1]: Reloading... Dec 13 01:54:19.110565 zram_generator::config[2810]: No configuration found. Dec 13 01:54:19.345330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:19.515639 systemd[1]: Reloading finished in 637 ms. Dec 13 01:54:19.607008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:54:19.607201 systemd[1]: kubelet.service: Failed with result 'signal'. 
Dec 13 01:54:19.607860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:19.620828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:19.908674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:19.924418 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:20.002275 kubelet[2870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:20.002275 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:20.002275 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:20.002889 kubelet[2870]: I1213 01:54:20.002416 2870 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:20.885550 kubelet[2870]: I1213 01:54:20.884991 2870 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:54:20.885550 kubelet[2870]: I1213 01:54:20.885045 2870 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:20.885869 kubelet[2870]: I1213 01:54:20.885823 2870 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:54:20.938040 kubelet[2870]: E1213 01:54:20.937973 2870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.221:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:20.939828 kubelet[2870]: I1213 01:54:20.939777 2870 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:20.953131 kubelet[2870]: E1213 01:54:20.953073 2870 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:54:20.953464 kubelet[2870]: I1213 01:54:20.953332 2870 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:54:20.960403 kubelet[2870]: I1213 01:54:20.960345 2870 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:54:20.960707 kubelet[2870]: I1213 01:54:20.960677 2870 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:54:20.961040 kubelet[2870]: I1213 01:54:20.960972 2870 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:20.961333 kubelet[2870]: I1213 01:54:20.961034 2870 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-221","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:54:20.961497 kubelet[2870]: I1213 01:54:20.961345 2870 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:20.961497 kubelet[2870]: I1213 01:54:20.961366 2870 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:54:20.961664 kubelet[2870]: I1213 01:54:20.961607 2870 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:20.965351 kubelet[2870]: I1213 01:54:20.965252 2870 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:54:20.965351 kubelet[2870]: I1213 01:54:20.965299 2870 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:20.966562 kubelet[2870]: I1213 01:54:20.966281 2870 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:54:20.966562 kubelet[2870]: I1213 01:54:20.966315 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:20.976165 kubelet[2870]: W1213 01:54:20.975257 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-221&limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:20.976165 kubelet[2870]: E1213 01:54:20.975351 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.19.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-221&limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:20.977582 kubelet[2870]: W1213 01:54:20.977463 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:20.977742 kubelet[2870]: E1213 01:54:20.977591 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:20.977822 kubelet[2870]: I1213 01:54:20.977779 2870 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:20.981117 kubelet[2870]: I1213 01:54:20.980922 2870 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:20.982575 kubelet[2870]: W1213 01:54:20.982217 2870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:54:20.983916 kubelet[2870]: I1213 01:54:20.983873 2870 server.go:1269] "Started kubelet" Dec 13 01:54:20.985287 kubelet[2870]: I1213 01:54:20.985230 2870 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:20.987278 kubelet[2870]: I1213 01:54:20.987242 2870 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:54:20.992912 kubelet[2870]: I1213 01:54:20.991351 2870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:20.992912 kubelet[2870]: I1213 01:54:20.991841 2870 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:20.995578 kubelet[2870]: E1213 01:54:20.992686 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.221:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.221:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b569fb193b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:20.983834939 +0000 UTC m=+1.052054827,LastTimestamp:2024-12-13 01:54:20.983834939 +0000 UTC m=+1.052054827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:20.998848 kubelet[2870]: I1213 01:54:20.998060 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:21.002903 kubelet[2870]: I1213 01:54:21.002847 2870 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:54:21.003721 kubelet[2870]: I1213 01:54:21.003673 2870 volume_manager.go:289] "Starting Kubelet Volume 
Manager" Dec 13 01:54:21.004241 kubelet[2870]: E1213 01:54:21.004190 2870 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-221\" not found" Dec 13 01:54:21.005847 kubelet[2870]: I1213 01:54:21.004809 2870 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:54:21.005847 kubelet[2870]: I1213 01:54:21.004948 2870 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:54:21.011099 kubelet[2870]: W1213 01:54:21.010993 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:21.011293 kubelet[2870]: E1213 01:54:21.011112 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:21.011293 kubelet[2870]: E1213 01:54:21.011242 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-221?timeout=10s\": dial tcp 172.31.19.221:6443: connect: connection refused" interval="200ms" Dec 13 01:54:21.012182 kubelet[2870]: I1213 01:54:21.012113 2870 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:21.012728 kubelet[2870]: I1213 01:54:21.012670 2870 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:21.016171 kubelet[2870]: I1213 01:54:21.016104 2870 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:21.034827 kubelet[2870]: I1213 01:54:21.034747 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:21.037569 kubelet[2870]: I1213 01:54:21.037331 2870 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:54:21.037569 kubelet[2870]: I1213 01:54:21.037378 2870 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:21.037569 kubelet[2870]: I1213 01:54:21.037411 2870 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:54:21.037569 kubelet[2870]: E1213 01:54:21.037487 2870 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:21.051121 kubelet[2870]: W1213 01:54:21.051035 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:21.051388 kubelet[2870]: E1213 01:54:21.051133 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:21.055589 kubelet[2870]: E1213 01:54:21.055419 2870 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:21.064318 kubelet[2870]: I1213 01:54:21.064278 2870 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:21.064318 kubelet[2870]: I1213 01:54:21.064310 2870 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:21.064582 kubelet[2870]: I1213 01:54:21.064343 2870 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:21.069890 kubelet[2870]: I1213 01:54:21.069837 2870 policy_none.go:49] "None policy: Start" Dec 13 01:54:21.071487 kubelet[2870]: I1213 01:54:21.070868 2870 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:21.071487 kubelet[2870]: I1213 01:54:21.070916 2870 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:21.082829 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:54:21.102803 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:54:21.104764 kubelet[2870]: E1213 01:54:21.104701 2870 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-221\" not found" Dec 13 01:54:21.110879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:54:21.121659 kubelet[2870]: I1213 01:54:21.121620 2870 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:21.123491 kubelet[2870]: I1213 01:54:21.122206 2870 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:54:21.123491 kubelet[2870]: I1213 01:54:21.122235 2870 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:54:21.124372 kubelet[2870]: I1213 01:54:21.124121 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:21.127193 kubelet[2870]: E1213 01:54:21.127052 2870 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-221\" not found" Dec 13 01:54:21.157327 systemd[1]: Created slice kubepods-burstable-pod6bcc0ea00c76c6db94879a80b4d2765a.slice - libcontainer container kubepods-burstable-pod6bcc0ea00c76c6db94879a80b4d2765a.slice. Dec 13 01:54:21.182477 systemd[1]: Created slice kubepods-burstable-podccd46e1cd19bca945df9aecbfcf83851.slice - libcontainer container kubepods-burstable-podccd46e1cd19bca945df9aecbfcf83851.slice. Dec 13 01:54:21.192134 systemd[1]: Created slice kubepods-burstable-pod1e84ef2d1688e72fbc5fe0cd1a18d37e.slice - libcontainer container kubepods-burstable-pod1e84ef2d1688e72fbc5fe0cd1a18d37e.slice. Dec 13 01:54:21.206248 kubelet[2870]: I1213 01:54:21.206194 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:21.206821 kubelet[2870]: I1213 01:54:21.206438 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:21.206821 kubelet[2870]: I1213 01:54:21.206499 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:21.206821 kubelet[2870]: I1213 01:54:21.206568 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:21.206821 kubelet[2870]: I1213 01:54:21.206611 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:21.206821 kubelet[2870]: I1213 
01:54:21.206652 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e84ef2d1688e72fbc5fe0cd1a18d37e-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-221\" (UID: \"1e84ef2d1688e72fbc5fe0cd1a18d37e\") " pod="kube-system/kube-scheduler-ip-172-31-19-221" Dec 13 01:54:21.207132 kubelet[2870]: I1213 01:54:21.206689 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:21.207132 kubelet[2870]: I1213 01:54:21.206723 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:21.207132 kubelet[2870]: I1213 01:54:21.206762 2870 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:21.212723 kubelet[2870]: E1213 01:54:21.212671 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-221?timeout=10s\": dial tcp 172.31.19.221:6443: connect: connection refused" interval="400ms" Dec 13 01:54:21.224545 kubelet[2870]: I1213 01:54:21.224469 2870 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:21.225141 kubelet[2870]: E1213 01:54:21.225050 2870 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.221:6443/api/v1/nodes\": dial tcp 172.31.19.221:6443: connect: connection refused" node="ip-172-31-19-221" Dec 13 01:54:21.428470 kubelet[2870]: I1213 01:54:21.428276 2870 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:21.428892 kubelet[2870]: E1213 01:54:21.428817 2870 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.221:6443/api/v1/nodes\": dial tcp 172.31.19.221:6443: connect: connection refused" node="ip-172-31-19-221" Dec 13 01:54:21.474600 containerd[2026]: time="2024-12-13T01:54:21.474433677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-221,Uid:6bcc0ea00c76c6db94879a80b4d2765a,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:21.489164 containerd[2026]: time="2024-12-13T01:54:21.488742033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-221,Uid:ccd46e1cd19bca945df9aecbfcf83851,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:21.499065 containerd[2026]: time="2024-12-13T01:54:21.499006581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-221,Uid:1e84ef2d1688e72fbc5fe0cd1a18d37e,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:21.613458 kubelet[2870]: E1213 01:54:21.613383 2870 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-221?timeout=10s\": dial tcp 172.31.19.221:6443: connect: connection refused" interval="800ms" Dec 13 01:54:21.832352 kubelet[2870]: I1213 01:54:21.831694 2870 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:21.832352 kubelet[2870]: E1213 01:54:21.832203 2870 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.221:6443/api/v1/nodes\": dial tcp 172.31.19.221:6443: connect: connection refused" node="ip-172-31-19-221" Dec 13 01:54:22.052917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056912159.mount: Deactivated successfully. Dec 13 01:54:22.077043 containerd[2026]: time="2024-12-13T01:54:22.076922408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:22.087236 containerd[2026]: time="2024-12-13T01:54:22.086983652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:54:22.090586 containerd[2026]: time="2024-12-13T01:54:22.089851268Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:22.092235 containerd[2026]: time="2024-12-13T01:54:22.092162840Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:22.097550 containerd[2026]: time="2024-12-13T01:54:22.096126032Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:22.099684 containerd[2026]: time="2024-12-13T01:54:22.099625568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:22.102077 containerd[2026]: time="2024-12-13T01:54:22.101626460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:22.105749 containerd[2026]: time="2024-12-13T01:54:22.105674564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:22.114646 containerd[2026]: time="2024-12-13T01:54:22.113582036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.001011ms" Dec 13 01:54:22.118817 containerd[2026]: time="2024-12-13T01:54:22.118728872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.821335ms" Dec 13 01:54:22.123753 containerd[2026]: time="2024-12-13T01:54:22.123649388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.468555ms" Dec 13 01:54:22.281881 kubelet[2870]: W1213 01:54:22.281748 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:22.281881 kubelet[2870]: E1213 01:54:22.281819 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:22.309892 kubelet[2870]: W1213 01:54:22.309744 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:22.309892 kubelet[2870]: E1213 01:54:22.309826 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:22.352916 containerd[2026]: time="2024-12-13T01:54:22.349200597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:22.352916 containerd[2026]: time="2024-12-13T01:54:22.349290501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:22.352916 containerd[2026]: time="2024-12-13T01:54:22.349315845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.352916 containerd[2026]: time="2024-12-13T01:54:22.349458609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.357784 containerd[2026]: time="2024-12-13T01:54:22.357139113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:22.357784 containerd[2026]: time="2024-12-13T01:54:22.357253221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:22.357784 containerd[2026]: time="2024-12-13T01:54:22.357290577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.357784 containerd[2026]: time="2024-12-13T01:54:22.357492753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.364572 containerd[2026]: time="2024-12-13T01:54:22.364170861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:22.364572 containerd[2026]: time="2024-12-13T01:54:22.364470741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:22.365086 containerd[2026]: time="2024-12-13T01:54:22.364825797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.366253 containerd[2026]: time="2024-12-13T01:54:22.366134997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:22.400922 systemd[1]: Started cri-containerd-ff70315bdbc65fa67ea0d60ae54876afc99014934286b4ffce9de2b60346ddcb.scope - libcontainer container ff70315bdbc65fa67ea0d60ae54876afc99014934286b4ffce9de2b60346ddcb. Dec 13 01:54:22.414950 kubelet[2870]: E1213 01:54:22.414157 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-221?timeout=10s\": dial tcp 172.31.19.221:6443: connect: connection refused" interval="1.6s" Dec 13 01:54:22.414764 systemd[1]: Started cri-containerd-9bdb23e389bf8ca6d6cf61d11baa1234d9be0d08222a5940087a65eba89f2061.scope - libcontainer container 9bdb23e389bf8ca6d6cf61d11baa1234d9be0d08222a5940087a65eba89f2061. Dec 13 01:54:22.430701 systemd[1]: Started cri-containerd-6ccce25b1009e62690c63f4153012e35259c61aa49ebf0e5e51734e2cd29aa56.scope - libcontainer container 6ccce25b1009e62690c63f4153012e35259c61aa49ebf0e5e51734e2cd29aa56. 
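[Editor's note] The "Failed to ensure lease exists, will retry" intervals above double on each failure: 200ms, 400ms, 800ms, 1.6s. A sketch of that doubling backoff; the 7s cap and the bounded attempt count are assumptions for illustration (kubelet itself retries indefinitely), and ensureLease is a hypothetical stand-in for the real lease update:

```go
// lease_backoff.go - sketch of the doubling retry visible in the journal above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the lease update against
// /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-221.
func ensureLease() error {
	return errors.New("dial tcp 172.31.19.221:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval in the journal
	const maxInterval = 7 * time.Second // assumed cap, not visible in the log

	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry: %v interval=%s\n", err, interval)
			time.Sleep(interval)
			interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the journal
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
}
```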
Dec 13 01:54:22.478852 kubelet[2870]: W1213 01:54:22.478299 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-221&limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:22.478852 kubelet[2870]: E1213 01:54:22.478419 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-221&limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:22.566257 containerd[2026]: time="2024-12-13T01:54:22.565077166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-221,Uid:1e84ef2d1688e72fbc5fe0cd1a18d37e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff70315bdbc65fa67ea0d60ae54876afc99014934286b4ffce9de2b60346ddcb\"" Dec 13 01:54:22.574812 containerd[2026]: time="2024-12-13T01:54:22.573692146Z" level=info msg="CreateContainer within sandbox \"ff70315bdbc65fa67ea0d60ae54876afc99014934286b4ffce9de2b60346ddcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:54:22.574974 kubelet[2870]: W1213 01:54:22.574684 2870 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.221:6443: connect: connection refused Dec 13 01:54:22.575174 kubelet[2870]: E1213 01:54:22.575123 2870 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.221:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:54:22.581294 containerd[2026]: time="2024-12-13T01:54:22.581210843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-221,Uid:6bcc0ea00c76c6db94879a80b4d2765a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bdb23e389bf8ca6d6cf61d11baa1234d9be0d08222a5940087a65eba89f2061\"" Dec 13 01:54:22.588849 containerd[2026]: time="2024-12-13T01:54:22.588787823Z" level=info msg="CreateContainer within sandbox \"9bdb23e389bf8ca6d6cf61d11baa1234d9be0d08222a5940087a65eba89f2061\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:54:22.593963 containerd[2026]: time="2024-12-13T01:54:22.593824547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-221,Uid:ccd46e1cd19bca945df9aecbfcf83851,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ccce25b1009e62690c63f4153012e35259c61aa49ebf0e5e51734e2cd29aa56\"" Dec 13 01:54:22.602310 containerd[2026]: time="2024-12-13T01:54:22.602206967Z" level=info msg="CreateContainer within sandbox \"6ccce25b1009e62690c63f4153012e35259c61aa49ebf0e5e51734e2cd29aa56\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:54:22.634002 containerd[2026]: time="2024-12-13T01:54:22.633932291Z" level=info msg="CreateContainer within sandbox \"9bdb23e389bf8ca6d6cf61d11baa1234d9be0d08222a5940087a65eba89f2061\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"3c569b479f4740bc38620d0fbd48b11dfc9e21e02e33841a5d21558be681e386\"" Dec 13 01:54:22.635658 containerd[2026]: time="2024-12-13T01:54:22.635217275Z" level=info msg="StartContainer for \"3c569b479f4740bc38620d0fbd48b11dfc9e21e02e33841a5d21558be681e386\"" Dec 13 01:54:22.636663 containerd[2026]: time="2024-12-13T01:54:22.636599231Z" level=info msg="CreateContainer within sandbox \"ff70315bdbc65fa67ea0d60ae54876afc99014934286b4ffce9de2b60346ddcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a00f67152a1a109d332155f9ffb46876e84fad7d75c47304a80bd90ed2525da9\"" Dec 13 01:54:22.637121 kubelet[2870]: I1213 01:54:22.637079 2870 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:22.637983 kubelet[2870]: E1213 01:54:22.637508 2870 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.221:6443/api/v1/nodes\": dial tcp 172.31.19.221:6443: connect: connection refused" node="ip-172-31-19-221" Dec 13 01:54:22.639039 containerd[2026]: time="2024-12-13T01:54:22.638228411Z" level=info msg="StartContainer for \"a00f67152a1a109d332155f9ffb46876e84fad7d75c47304a80bd90ed2525da9\"" Dec 13 01:54:22.657371 containerd[2026]: time="2024-12-13T01:54:22.657312575Z" level=info msg="CreateContainer within sandbox \"6ccce25b1009e62690c63f4153012e35259c61aa49ebf0e5e51734e2cd29aa56\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21725b2a0710e8c62139ecfdf7082eb33b735b947d4de92b5b9186cdb8b58e99\"" Dec 13 01:54:22.658478 containerd[2026]: time="2024-12-13T01:54:22.658419023Z" level=info msg="StartContainer for \"21725b2a0710e8c62139ecfdf7082eb33b735b947d4de92b5b9186cdb8b58e99\"" Dec 13 01:54:22.696335 systemd[1]: Started cri-containerd-3c569b479f4740bc38620d0fbd48b11dfc9e21e02e33841a5d21558be681e386.scope - libcontainer container 3c569b479f4740bc38620d0fbd48b11dfc9e21e02e33841a5d21558be681e386. Dec 13 01:54:22.735942 systemd[1]: Started cri-containerd-a00f67152a1a109d332155f9ffb46876e84fad7d75c47304a80bd90ed2525da9.scope - libcontainer container a00f67152a1a109d332155f9ffb46876e84fad7d75c47304a80bd90ed2525da9. Dec 13 01:54:22.757900 systemd[1]: Started cri-containerd-21725b2a0710e8c62139ecfdf7082eb33b735b947d4de92b5b9186cdb8b58e99.scope - libcontainer container 21725b2a0710e8c62139ecfdf7082eb33b735b947d4de92b5b9186cdb8b58e99. Dec 13 01:54:22.852980 containerd[2026]: time="2024-12-13T01:54:22.852918120Z" level=info msg="StartContainer for \"3c569b479f4740bc38620d0fbd48b11dfc9e21e02e33841a5d21558be681e386\" returns successfully" Dec 13 01:54:22.900700 containerd[2026]: time="2024-12-13T01:54:22.898589652Z" level=info msg="StartContainer for \"a00f67152a1a109d332155f9ffb46876e84fad7d75c47304a80bd90ed2525da9\" returns successfully" Dec 13 01:54:22.942890 containerd[2026]: time="2024-12-13T01:54:22.942249288Z" level=info msg="StartContainer for \"21725b2a0710e8c62139ecfdf7082eb33b735b947d4de92b5b9186cdb8b58e99\" returns successfully" Dec 13 01:54:23.709648 update_engine[2005]: I20241213 01:54:23.709560 2005 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:54:23.849779 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3153) Dec 13 01:54:24.241967 kubelet[2870]: I1213 01:54:24.241113 2870 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:26.980307 kubelet[2870]: I1213 01:54:26.980018 2870 apiserver.go:52] "Watching apiserver" Dec 13 01:54:27.006255 kubelet[2870]: I1213 01:54:27.006046 2870 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:54:27.071861 kubelet[2870]: E1213 01:54:27.071793 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-221\" not found" node="ip-172-31-19-221" Dec 13 01:54:27.100076 kubelet[2870]: E1213 01:54:27.099896 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b569fb193b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:20.983834939 +0000 UTC m=+1.052054827,LastTimestamp:2024-12-13 01:54:20.983834939 +0000 UTC m=+1.052054827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:27.159514 kubelet[2870]: E1213 01:54:27.158304 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b56e3f019b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:21.055394203 +0000 UTC m=+1.123614091,LastTimestamp:2024-12-13 01:54:21.055394203 +0000 UTC m=+1.123614091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:27.167740 kubelet[2870]: I1213 01:54:27.167612 2870 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-221" Dec 13 01:54:27.221985 kubelet[2870]: E1213 01:54:27.221597 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b56eb75b2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-19-221 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:21.063281455 +0000 UTC m=+1.131501331,LastTimestamp:2024-12-13 01:54:21.063281455 +0000 UTC m=+1.131501331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:27.285471 kubelet[2870]: E1213 01:54:27.284919 2870 event.go:359] "Server rejected event 
(will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b56eb776bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-19-221 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:21.063288511 +0000 UTC m=+1.131508387,LastTimestamp:2024-12-13 01:54:21.063288511 +0000 UTC m=+1.131508387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:27.349858 kubelet[2870]: E1213 01:54:27.348984 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b56eb787f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ip-172-31-19-221 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:21.063292915 +0000 UTC m=+1.131512791,LastTimestamp:2024-12-13 01:54:21.063292915 +0000 UTC m=+1.131512791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:27.417883 kubelet[2870]: E1213 01:54:27.415863 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-221.181099b5728e2fe3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-221,UID:ip-172-31-19-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ip-172-31-19-221,},FirstTimestamp:2024-12-13 01:54:21.127692259 +0000 UTC m=+1.195912135,LastTimestamp:2024-12-13 01:54:21.127692259 +0000 UTC m=+1.195912135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-221,}" Dec 13 01:54:29.426629 systemd[1]: Reloading requested from client PID 3238 ('systemctl') (unit session-9.scope)... Dec 13 01:54:29.427182 systemd[1]: Reloading... Dec 13 01:54:29.714730 zram_generator::config[3284]: No configuration found. Dec 13 01:54:29.979179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:30.206432 systemd[1]: Reloading finished in 778 ms. Dec 13 01:54:30.311840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:30.331597 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:54:30.332782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:30.333048 systemd[1]: kubelet.service: Consumed 1.819s CPU time, 115.0M memory peak, 0B memory swap peak. Dec 13 01:54:30.349722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
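[Editor's note] The rejected events above carry names like ip-172-31-19-221.181099b569fb193b. The suffix is the event's FirstTimestamp as UnixNano rendered in hex, which matches how client-go's event recorder names events (fmt.Sprintf("%v.%x", name, t.UnixNano())). Recomputing the suffix from the timestamp printed in the journal confirms it:

```go
// event_name.go - recompute the event-name suffix from the journal's timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// FirstTimestamp from the log: 2024-12-13 01:54:20.983834939 +0000 UTC
	ts := time.Date(2024, time.December, 13, 1, 54, 20, 983834939, time.UTC)

	// Hex-encoded UnixNano appended to the involved object's name.
	name := fmt.Sprintf("%v.%x", "ip-172-31-19-221", ts.UnixNano())
	fmt.Println(name) // ip-172-31-19-221.181099b569fb193b
}
```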
Dec 13 01:54:30.714815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:30.715273 (kubelet)[3338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:30.814172 kubelet[3338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:30.814172 kubelet[3338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:30.814172 kubelet[3338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:30.815767 kubelet[3338]: I1213 01:54:30.814353 3338 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:30.832292 kubelet[3338]: I1213 01:54:30.832229 3338 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:54:30.832292 kubelet[3338]: I1213 01:54:30.832277 3338 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:30.833604 kubelet[3338]: I1213 01:54:30.832782 3338 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:54:30.835856 kubelet[3338]: I1213 01:54:30.835780 3338 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:54:30.840369 kubelet[3338]: I1213 01:54:30.839960 3338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:30.855669 kubelet[3338]: E1213 01:54:30.854355 3338 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:54:30.855669 kubelet[3338]: I1213 01:54:30.854425 3338 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:54:30.864629 kubelet[3338]: I1213 01:54:30.864573 3338 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:54:30.864932 kubelet[3338]: I1213 01:54:30.864846 3338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:54:30.865168 kubelet[3338]: I1213 01:54:30.865082 3338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:30.865487 kubelet[3338]: I1213 01:54:30.865159 3338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-221","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:54:30.865750 kubelet[3338]: I1213 01:54:30.865503 3338 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:30.865750 kubelet[3338]: I1213 01:54:30.865554 3338 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:54:30.865750 kubelet[3338]: I1213 01:54:30.865625 3338 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:30.865922 kubelet[3338]: I1213 01:54:30.865896 3338 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:54:30.865984 kubelet[3338]: I1213 01:54:30.865926 3338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:30.865984 kubelet[3338]: I1213 01:54:30.865977 3338 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:54:30.867980 kubelet[3338]: I1213 01:54:30.865999 3338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:30.868695 kubelet[3338]: I1213 01:54:30.868438 3338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:30.870016 kubelet[3338]: I1213 01:54:30.869980 3338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:30.874453 kubelet[3338]: I1213 01:54:30.874403 3338 server.go:1269] "Started kubelet" Dec 13 01:54:30.889034 kubelet[3338]: I1213 01:54:30.874730 3338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 
13 01:54:30.889240 kubelet[3338]: I1213 01:54:30.889201 3338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:30.901646 kubelet[3338]: I1213 01:54:30.900717 3338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:30.911775 kubelet[3338]: I1213 01:54:30.911694 3338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:30.916562 kubelet[3338]: I1213 01:54:30.913722 3338 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:54:30.916562 kubelet[3338]: I1213 01:54:30.915974 3338 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:54:30.916734 kubelet[3338]: I1213 01:54:30.916608 3338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:54:30.917981 kubelet[3338]: E1213 01:54:30.917939 3338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-221\" not found" Dec 13 01:54:30.923567 kubelet[3338]: I1213 01:54:30.923478 3338 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:54:30.923830 kubelet[3338]: I1213 01:54:30.923785 3338 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:54:30.952290 kubelet[3338]: I1213 01:54:30.951588 3338 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:30.955260 kubelet[3338]: I1213 01:54:30.954898 3338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:30.987504 kubelet[3338]: I1213 01:54:30.982365 3338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:30.992349 kubelet[3338]: I1213 01:54:30.991864 3338 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:54:30.992349 kubelet[3338]: I1213 01:54:30.991917 3338 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:30.992349 kubelet[3338]: I1213 01:54:30.991948 3338 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:54:30.992349 kubelet[3338]: E1213 01:54:30.992016 3338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:30.993931 kubelet[3338]: I1213 01:54:30.993492 3338 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:31.031334 kubelet[3338]: E1213 01:54:31.031294 3338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-221\" not found" Dec 13 01:54:31.096223 kubelet[3338]: E1213 01:54:31.096174 3338 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:54:31.161438 kubelet[3338]: I1213 01:54:31.161369 3338 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:31.161722 kubelet[3338]: I1213 01:54:31.161698 3338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:31.161831 kubelet[3338]: I1213 01:54:31.161813 3338 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:31.162168 kubelet[3338]: I1213 01:54:31.162143 3338 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:54:31.162302 kubelet[3338]: I1213 01:54:31.162263 3338 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:54:31.162402 kubelet[3338]: I1213 01:54:31.162385 3338 policy_none.go:49] "None policy: Start" Dec 13 01:54:31.165146 kubelet[3338]: I1213 01:54:31.165062 3338 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:31.165472 kubelet[3338]: I1213 01:54:31.165426 3338 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:31.166071 kubelet[3338]: I1213 01:54:31.166047 3338 state_mem.go:75] "Updated machine memory state" Dec 13 01:54:31.176464 kubelet[3338]: I1213 01:54:31.176425 3338 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:31.181095 kubelet[3338]: I1213 01:54:31.180045 3338 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:54:31.181305 kubelet[3338]: I1213 01:54:31.181253 3338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:54:31.181976 kubelet[3338]: I1213 01:54:31.181950 3338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:31.313131 kubelet[3338]: I1213 01:54:31.312099 3338 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-221" Dec 13 01:54:31.320462 kubelet[3338]: E1213 01:54:31.320232 3338 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-221\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:31.321829 kubelet[3338]: E1213 01:54:31.321400 3338 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-221\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.325915 kubelet[3338]: I1213 01:54:31.324763 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:31.325915 kubelet[3338]: I1213 01:54:31.324857 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:31.325915 kubelet[3338]: I1213 01:54:31.324914 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.325915 kubelet[3338]: I1213 01:54:31.324952 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.325915 kubelet[3338]: I1213 01:54:31.325001 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bcc0ea00c76c6db94879a80b4d2765a-ca-certs\") pod \"kube-apiserver-ip-172-31-19-221\" (UID: \"6bcc0ea00c76c6db94879a80b4d2765a\") " pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:31.326298 kubelet[3338]: I1213 01:54:31.325043 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.326298 kubelet[3338]: I1213 01:54:31.325083 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.326298 kubelet[3338]: I1213 01:54:31.325124 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccd46e1cd19bca945df9aecbfcf83851-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-221\" (UID: \"ccd46e1cd19bca945df9aecbfcf83851\") " pod="kube-system/kube-controller-manager-ip-172-31-19-221" Dec 13 01:54:31.326298 kubelet[3338]: I1213 01:54:31.325170 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e84ef2d1688e72fbc5fe0cd1a18d37e-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-221\" (UID: \"1e84ef2d1688e72fbc5fe0cd1a18d37e\") " pod="kube-system/kube-scheduler-ip-172-31-19-221" Dec 13 01:54:31.336040 
kubelet[3338]: I1213 01:54:31.335651 3338 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-221" Dec 13 01:54:31.336040 kubelet[3338]: I1213 01:54:31.335783 3338 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-221" Dec 13 01:54:31.883329 kubelet[3338]: I1213 01:54:31.883269 3338 apiserver.go:52] "Watching apiserver" Dec 13 01:54:31.924649 kubelet[3338]: I1213 01:54:31.924567 3338 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:54:32.132575 kubelet[3338]: E1213 01:54:32.132287 3338 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-221\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-221" Dec 13 01:54:32.271311 kubelet[3338]: I1213 01:54:32.271007 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-221" podStartSLOduration=4.270980731 podStartE2EDuration="4.270980731s" podCreationTimestamp="2024-12-13 01:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:32.215747994 +0000 UTC m=+1.488331664" watchObservedRunningTime="2024-12-13 01:54:32.270980731 +0000 UTC m=+1.543564377" Dec 13 01:54:32.318138 kubelet[3338]: I1213 01:54:32.315908 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-221" podStartSLOduration=1.315874519 podStartE2EDuration="1.315874519s" podCreationTimestamp="2024-12-13 01:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:32.273168019 +0000 UTC m=+1.545751689" watchObservedRunningTime="2024-12-13 01:54:32.315874519 +0000 UTC m=+1.588458165" Dec 13 01:54:33.433580 kubelet[3338]: I1213 01:54:33.433262 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-221" podStartSLOduration=5.433242524 podStartE2EDuration="5.433242524s" podCreationTimestamp="2024-12-13 01:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:32.318964375 +0000 UTC m=+1.591571409" watchObservedRunningTime="2024-12-13 01:54:33.433242524 +0000 UTC m=+2.705826194" Dec 13 01:54:35.326698 kubelet[3338]: I1213 01:54:35.326634 3338 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:54:35.331831 containerd[2026]: time="2024-12-13T01:54:35.331074706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:54:35.332882 kubelet[3338]: I1213 01:54:35.331511 3338 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:54:35.704361 systemd[1]: Created slice kubepods-besteffort-pod292563b3_a1f5_4051_a3a7_c43a21cc4d3f.slice - libcontainer container kubepods-besteffort-pod292563b3_a1f5_4051_a3a7_c43a21cc4d3f.slice. 
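[Annotation] The "Updating runtime config through cri with podcidr" entry above is the kubelet handing the node's pod CIDR (192.168.0.0/24) down to containerd over the CRI runtime service. A minimal sketch of that call, assuming the published k8s.io/cri-api types and an already-dialed gRPC client; the helper name pushPodCIDR is illustrative, not kubelet code:

package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pushPodCIDR mirrors the "Updating runtime config through cri with podcidr"
// step: it hands the runtime the pod CIDR via UpdateRuntimeConfig. client is
// assumed to be a RuntimeServiceClient already connected to containerd's CRI
// socket; error handling is left to the caller.
func pushPodCIDR(ctx context.Context, client runtimeapi.RuntimeServiceClient, cidr string) error {
	_, err := client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: cidr},
		},
	})
	return err
}

As the containerd line notes, no CNI config exists yet at this point, so the runtime records the CIDR and waits for another component (here, Calico) to drop the config.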
Dec 13 01:54:35.757371 kubelet[3338]: I1213 01:54:35.756506 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-xtables-lock\") pod \"kube-proxy-b5w6f\" (UID: \"292563b3-a1f5-4051-a3a7-c43a21cc4d3f\") " pod="kube-system/kube-proxy-b5w6f" Dec 13 01:54:35.757371 kubelet[3338]: I1213 01:54:35.756711 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-lib-modules\") pod \"kube-proxy-b5w6f\" (UID: \"292563b3-a1f5-4051-a3a7-c43a21cc4d3f\") " pod="kube-system/kube-proxy-b5w6f" Dec 13 01:54:35.757371 kubelet[3338]: I1213 01:54:35.756820 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-kube-proxy\") pod \"kube-proxy-b5w6f\" (UID: \"292563b3-a1f5-4051-a3a7-c43a21cc4d3f\") " pod="kube-system/kube-proxy-b5w6f" Dec 13 01:54:35.757371 kubelet[3338]: I1213 01:54:35.756881 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swx4n\" (UniqueName: \"kubernetes.io/projected/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-kube-api-access-swx4n\") pod \"kube-proxy-b5w6f\" (UID: \"292563b3-a1f5-4051-a3a7-c43a21cc4d3f\") " pod="kube-system/kube-proxy-b5w6f" Dec 13 01:54:35.943543 kubelet[3338]: E1213 01:54:35.943057 3338 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:54:35.943543 kubelet[3338]: E1213 01:54:35.943113 3338 projected.go:194] Error preparing data for projected volume kube-api-access-swx4n for pod kube-system/kube-proxy-b5w6f: configmap "kube-root-ca.crt" not found Dec 13 01:54:35.943543 kubelet[3338]: E1213 01:54:35.943224 3338 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-kube-api-access-swx4n podName:292563b3-a1f5-4051-a3a7-c43a21cc4d3f nodeName:}" failed. No retries permitted until 2024-12-13 01:54:36.443192513 +0000 UTC m=+5.715776159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-swx4n" (UniqueName: "kubernetes.io/projected/292563b3-a1f5-4051-a3a7-c43a21cc4d3f-kube-api-access-swx4n") pod "kube-proxy-b5w6f" (UID: "292563b3-a1f5-4051-a3a7-c43a21cc4d3f") : configmap "kube-root-ca.crt" not found Dec 13 01:54:36.238878 systemd[1]: Created slice kubepods-besteffort-pod7f6bdc06_f29f_4a37_bca7_d12e8ab50fe2.slice - libcontainer container kubepods-besteffort-pod7f6bdc06_f29f_4a37_bca7_d12e8ab50fe2.slice. 
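[Annotation] The MountVolume.SetUp failure above is expected ordering noise: the projected token volume needs the kube-root-ca.crt ConfigMap, which does not exist yet seconds after control-plane startup. Kubelet gates the retry 500ms out ("durationBeforeRetry 500ms", no retries until 01:54:36.443) and roughly doubles the wait on each repeated failure up to a cap. A runnable sketch of that doubling policy; the 2m2s cap matches waits commonly seen in kubelet logs, but treat both constants as illustrative rather than a quote of kubelet source:

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay models the backoff kubelet applies to a failed volume
// operation: first retry after 500ms, then doubling per failure, capped.
func nextRetryDelay(prev time.Duration) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, illustrative
	)
	if prev <= 0 {
		return initialDelay
	}
	if next := 2 * prev; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextRetryDelay(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s
	}
}

In this log the first retry succeeds: the sandbox for kube-proxy-b5w6f is created at 01:54:36.620.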
Dec 13 01:54:36.272706 kubelet[3338]: I1213 01:54:36.269225 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2-var-lib-calico\") pod \"tigera-operator-76c4976dd7-mj49g\" (UID: \"7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2\") " pod="tigera-operator/tigera-operator-76c4976dd7-mj49g" Dec 13 01:54:36.272706 kubelet[3338]: I1213 01:54:36.269311 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpx6\" (UniqueName: \"kubernetes.io/projected/7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2-kube-api-access-blpx6\") pod \"tigera-operator-76c4976dd7-mj49g\" (UID: \"7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2\") " pod="tigera-operator/tigera-operator-76c4976dd7-mj49g" Dec 13 01:54:36.546807 containerd[2026]: time="2024-12-13T01:54:36.546658140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-mj49g,Uid:7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:54:36.620661 containerd[2026]: time="2024-12-13T01:54:36.620607348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5w6f,Uid:292563b3-a1f5-4051-a3a7-c43a21cc4d3f,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:36.628408 containerd[2026]: time="2024-12-13T01:54:36.624881340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:36.628408 containerd[2026]: time="2024-12-13T01:54:36.624960204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:36.628408 containerd[2026]: time="2024-12-13T01:54:36.624985872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:36.628408 containerd[2026]: time="2024-12-13T01:54:36.625165368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:36.699891 systemd[1]: Started cri-containerd-23c2e20a1ba3f6183afc7dfa0d818e20e56b364c570d1bfc7b4d67716e0f0ed0.scope - libcontainer container 23c2e20a1ba3f6183afc7dfa0d818e20e56b364c570d1bfc7b4d67716e0f0ed0. Dec 13 01:54:36.743689 containerd[2026]: time="2024-12-13T01:54:36.743348137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:36.743689 containerd[2026]: time="2024-12-13T01:54:36.743490217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:36.744870 containerd[2026]: time="2024-12-13T01:54:36.744744097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:36.745698 containerd[2026]: time="2024-12-13T01:54:36.745379041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:36.820891 systemd[1]: Started cri-containerd-ba523b3bdb9bbdec16168ca79619a45d89eb9d87a3e1bb7eb256c678e2c0aa3d.scope - libcontainer container ba523b3bdb9bbdec16168ca79619a45d89eb9d87a3e1bb7eb256c678e2c0aa3d. 
Dec 13 01:54:36.863857 containerd[2026]: time="2024-12-13T01:54:36.863731081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-mj49g,Uid:7f6bdc06-f29f-4a37-bca7-d12e8ab50fe2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"23c2e20a1ba3f6183afc7dfa0d818e20e56b364c570d1bfc7b4d67716e0f0ed0\"" Dec 13 01:54:36.872368 containerd[2026]: time="2024-12-13T01:54:36.872255401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:54:36.924516 containerd[2026]: time="2024-12-13T01:54:36.924328442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5w6f,Uid:292563b3-a1f5-4051-a3a7-c43a21cc4d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba523b3bdb9bbdec16168ca79619a45d89eb9d87a3e1bb7eb256c678e2c0aa3d\"" Dec 13 01:54:36.939082 containerd[2026]: time="2024-12-13T01:54:36.938846378Z" level=info msg="CreateContainer within sandbox \"ba523b3bdb9bbdec16168ca79619a45d89eb9d87a3e1bb7eb256c678e2c0aa3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:54:36.981780 containerd[2026]: time="2024-12-13T01:54:36.981707366Z" level=info msg="CreateContainer within sandbox \"ba523b3bdb9bbdec16168ca79619a45d89eb9d87a3e1bb7eb256c678e2c0aa3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"257fb99854c37fd15cc00a2678377cd250ced41143c5267dc97ccbd3bc1bae48\"" Dec 13 01:54:36.984282 containerd[2026]: time="2024-12-13T01:54:36.984092618Z" level=info msg="StartContainer for \"257fb99854c37fd15cc00a2678377cd250ced41143c5267dc97ccbd3bc1bae48\"" Dec 13 01:54:37.078681 systemd[1]: Started cri-containerd-257fb99854c37fd15cc00a2678377cd250ced41143c5267dc97ccbd3bc1bae48.scope - libcontainer container 257fb99854c37fd15cc00a2678377cd250ced41143c5267dc97ccbd3bc1bae48. Dec 13 01:54:37.166896 containerd[2026]: time="2024-12-13T01:54:37.164939303Z" level=info msg="StartContainer for \"257fb99854c37fd15cc00a2678377cd250ced41143c5267dc97ccbd3bc1bae48\" returns successfully" Dec 13 01:54:37.283981 sudo[2377]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:37.311424 sshd[2374]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:37.318240 systemd[1]: sshd@8-172.31.19.221:22-139.178.68.195:57182.service: Deactivated successfully. Dec 13 01:54:37.324757 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:54:37.326674 systemd[1]: session-9.scope: Consumed 9.962s CPU time, 149.1M memory peak, 0B memory swap peak. Dec 13 01:54:37.328369 systemd-logind[2004]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:54:37.333345 systemd-logind[2004]: Removed session 9. Dec 13 01:54:39.195397 kubelet[3338]: I1213 01:54:39.195290 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5w6f" podStartSLOduration=4.195265861 podStartE2EDuration="4.195265861s" podCreationTimestamp="2024-12-13 01:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:38.18995682 +0000 UTC m=+7.462540550" watchObservedRunningTime="2024-12-13 01:54:39.195265861 +0000 UTC m=+8.467849507" Dec 13 01:54:39.594171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715612215.mount: Deactivated successfully. 
Dec 13 01:54:41.031484 containerd[2026]: time="2024-12-13T01:54:41.031250726Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:41.033104 containerd[2026]: time="2024-12-13T01:54:41.033027770Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125992" Dec 13 01:54:41.034594 containerd[2026]: time="2024-12-13T01:54:41.034493570Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:41.045575 containerd[2026]: time="2024-12-13T01:54:41.044864942Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:41.048987 containerd[2026]: time="2024-12-13T01:54:41.048922538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.176563865s" Dec 13 01:54:41.049215 containerd[2026]: time="2024-12-13T01:54:41.049180106Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:54:41.055378 containerd[2026]: time="2024-12-13T01:54:41.055308170Z" level=info msg="CreateContainer within sandbox \"23c2e20a1ba3f6183afc7dfa0d818e20e56b364c570d1bfc7b4d67716e0f0ed0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:54:41.085401 containerd[2026]: time="2024-12-13T01:54:41.085310894Z" level=info msg="CreateContainer within sandbox \"23c2e20a1ba3f6183afc7dfa0d818e20e56b364c570d1bfc7b4d67716e0f0ed0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"36376f3744ba6a4920a093d88a10036709fcd530a9a6b6ee020a0b4f0caece5e\"" Dec 13 01:54:41.086512 containerd[2026]: time="2024-12-13T01:54:41.086457014Z" level=info msg="StartContainer for \"36376f3744ba6a4920a093d88a10036709fcd530a9a6b6ee020a0b4f0caece5e\"" Dec 13 01:54:41.138855 systemd[1]: Started cri-containerd-36376f3744ba6a4920a093d88a10036709fcd530a9a6b6ee020a0b4f0caece5e.scope - libcontainer container 36376f3744ba6a4920a093d88a10036709fcd530a9a6b6ee020a0b4f0caece5e. 
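[Annotation] A quick throughput check on the pull above: 19,125,992 bytes ("bytes read=19125992") in 4.176563865s is roughly 19125992 / 4.1766 ≈ 4.58 MB/s (≈ 4.37 MiB/s) from quay.io. The separately reported size "19120155" in the Pulled line is containerd's accounting of the image content itself, a few KB less than the raw byte counter.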
Dec 13 01:54:41.201313 containerd[2026]: time="2024-12-13T01:54:41.200776995Z" level=info msg="StartContainer for \"36376f3744ba6a4920a093d88a10036709fcd530a9a6b6ee020a0b4f0caece5e\" returns successfully" Dec 13 01:54:42.226483 kubelet[3338]: I1213 01:54:42.226365 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-mj49g" podStartSLOduration=2.044607155 podStartE2EDuration="6.226340644s" podCreationTimestamp="2024-12-13 01:54:36 +0000 UTC" firstStartedPulling="2024-12-13 01:54:36.869168389 +0000 UTC m=+6.141752035" lastFinishedPulling="2024-12-13 01:54:41.050901878 +0000 UTC m=+10.323485524" observedRunningTime="2024-12-13 01:54:42.2259868 +0000 UTC m=+11.498570470" watchObservedRunningTime="2024-12-13 01:54:42.226340644 +0000 UTC m=+11.498924290" Dec 13 01:54:46.617132 systemd[1]: Created slice kubepods-besteffort-pod71577281_4dfe_42f6_87b4_82e229c6bff3.slice - libcontainer container kubepods-besteffort-pod71577281_4dfe_42f6_87b4_82e229c6bff3.slice. Dec 13 01:54:46.638551 kubelet[3338]: I1213 01:54:46.638449 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71577281-4dfe-42f6-87b4-82e229c6bff3-tigera-ca-bundle\") pod \"calico-typha-7f8696967d-c84zf\" (UID: \"71577281-4dfe-42f6-87b4-82e229c6bff3\") " pod="calico-system/calico-typha-7f8696967d-c84zf" Dec 13 01:54:46.639955 kubelet[3338]: I1213 01:54:46.639692 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/71577281-4dfe-42f6-87b4-82e229c6bff3-typha-certs\") pod \"calico-typha-7f8696967d-c84zf\" (UID: \"71577281-4dfe-42f6-87b4-82e229c6bff3\") " pod="calico-system/calico-typha-7f8696967d-c84zf" Dec 13 01:54:46.639955 kubelet[3338]: I1213 01:54:46.639781 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lq5k\" (UniqueName: \"kubernetes.io/projected/71577281-4dfe-42f6-87b4-82e229c6bff3-kube-api-access-2lq5k\") pod \"calico-typha-7f8696967d-c84zf\" (UID: \"71577281-4dfe-42f6-87b4-82e229c6bff3\") " pod="calico-system/calico-typha-7f8696967d-c84zf" Dec 13 01:54:46.925200 systemd[1]: Created slice kubepods-besteffort-pod4bbe889b_dfef_446f_975a_a8889f4343d6.slice - libcontainer container kubepods-besteffort-pod4bbe889b_dfef_446f_975a_a8889f4343d6.slice. 
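[Annotation] The two durations in the tigera-operator entry above reconcile exactly, since podStartSLOduration excludes image-pull time:

pull window       = m=+10.323485524 - m=+6.141752035 = 4.181733489s
podStartSLOduration = 6.226340644s (E2E) - 4.181733489s = 2.044607155s

which matches the logged value. The earlier static-pod entries (kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy) report firstStartedPulling/lastFinishedPulling as the zero time (0001-01-01), so for them the SLO and E2E durations coincide.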
Dec 13 01:54:46.932948 containerd[2026]: time="2024-12-13T01:54:46.932868539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f8696967d-c84zf,Uid:71577281-4dfe-42f6-87b4-82e229c6bff3,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:46.941853 kubelet[3338]: I1213 01:54:46.941719 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-var-run-calico\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942040 kubelet[3338]: I1213 01:54:46.941867 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-lib-modules\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942040 kubelet[3338]: I1213 01:54:46.941907 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4bbe889b-dfef-446f-975a-a8889f4343d6-node-certs\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942040 kubelet[3338]: I1213 01:54:46.941944 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-cni-log-dir\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942040 kubelet[3338]: I1213 01:54:46.941979 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-var-lib-calico\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942040 kubelet[3338]: I1213 01:54:46.942015 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79d7q\" (UniqueName: \"kubernetes.io/projected/4bbe889b-dfef-446f-975a-a8889f4343d6-kube-api-access-79d7q\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942291 kubelet[3338]: I1213 01:54:46.942061 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-cni-net-dir\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942291 kubelet[3338]: I1213 01:54:46.942098 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-policysync\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942291 kubelet[3338]: I1213 01:54:46.942134 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4bbe889b-dfef-446f-975a-a8889f4343d6-tigera-ca-bundle\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942291 kubelet[3338]: I1213 01:54:46.942170 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-cni-bin-dir\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942291 kubelet[3338]: I1213 01:54:46.942234 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-xtables-lock\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:46.942589 kubelet[3338]: I1213 01:54:46.942274 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4bbe889b-dfef-446f-975a-a8889f4343d6-flexvol-driver-host\") pod \"calico-node-tdkfb\" (UID: \"4bbe889b-dfef-446f-975a-a8889f4343d6\") " pod="calico-system/calico-node-tdkfb" Dec 13 01:54:47.023613 containerd[2026]: time="2024-12-13T01:54:47.022707728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:47.023613 containerd[2026]: time="2024-12-13T01:54:47.022887440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:47.023613 containerd[2026]: time="2024-12-13T01:54:47.022936220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.023613 containerd[2026]: time="2024-12-13T01:54:47.023146748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.046645 kubelet[3338]: E1213 01:54:47.046581 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.046645 kubelet[3338]: W1213 01:54:47.046628 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.046901 kubelet[3338]: E1213 01:54:47.046686 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.047437 kubelet[3338]: E1213 01:54:47.047384 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.047437 kubelet[3338]: W1213 01:54:47.047424 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.050349 kubelet[3338]: E1213 01:54:47.049761 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.051656 kubelet[3338]: E1213 01:54:47.051599 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.051656 kubelet[3338]: W1213 01:54:47.051644 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.053767 kubelet[3338]: E1213 01:54:47.053701 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.053927 kubelet[3338]: E1213 01:54:47.053884 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.053927 kubelet[3338]: W1213 01:54:47.053904 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.054048 kubelet[3338]: E1213 01:54:47.053949 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.057605 kubelet[3338]: E1213 01:54:47.055738 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.057605 kubelet[3338]: W1213 01:54:47.055776 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.057605 kubelet[3338]: E1213 01:54:47.057499 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.060340 kubelet[3338]: E1213 01:54:47.060288 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.060340 kubelet[3338]: W1213 01:54:47.060327 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.061358 kubelet[3338]: E1213 01:54:47.061178 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.064344 kubelet[3338]: E1213 01:54:47.064260 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.064344 kubelet[3338]: W1213 01:54:47.064317 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.066570 kubelet[3338]: E1213 01:54:47.066166 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.071355 kubelet[3338]: E1213 01:54:47.070469 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.071355 kubelet[3338]: W1213 01:54:47.070535 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.071355 kubelet[3338]: E1213 01:54:47.070724 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.073159 kubelet[3338]: E1213 01:54:47.072597 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.073159 kubelet[3338]: W1213 01:54:47.072637 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.074562 kubelet[3338]: E1213 01:54:47.073780 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.074766 kubelet[3338]: W1213 01:54:47.074730 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.074988 kubelet[3338]: E1213 01:54:47.074262 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.075324 kubelet[3338]: E1213 01:54:47.075070 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.078387 kubelet[3338]: E1213 01:54:47.076679 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.078387 kubelet[3338]: W1213 01:54:47.076714 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.080863 kubelet[3338]: E1213 01:54:47.080200 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.081463 kubelet[3338]: E1213 01:54:47.081128 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.081463 kubelet[3338]: W1213 01:54:47.081268 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.081463 kubelet[3338]: E1213 01:54:47.081402 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.082982 kubelet[3338]: E1213 01:54:47.082897 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.082982 kubelet[3338]: W1213 01:54:47.082930 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.084126 kubelet[3338]: E1213 01:54:47.083685 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.086456 kubelet[3338]: E1213 01:54:47.086221 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.086456 kubelet[3338]: W1213 01:54:47.086258 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.089642 kubelet[3338]: E1213 01:54:47.089266 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.089642 kubelet[3338]: W1213 01:54:47.089339 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.092130 kubelet[3338]: E1213 01:54:47.091869 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.092130 kubelet[3338]: W1213 01:54:47.091915 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.092635 kubelet[3338]: E1213 01:54:47.092574 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.093841 kubelet[3338]: E1213 01:54:47.092851 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.094308 kubelet[3338]: E1213 01:54:47.092871 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.094732 kubelet[3338]: E1213 01:54:47.094667 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.095338 kubelet[3338]: W1213 01:54:47.095048 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.095144 systemd[1]: Started cri-containerd-59d107f956c4d34d1f35d5e4cbd7d225f911379f958a4611b13e6dece85e49c6.scope - libcontainer container 59d107f956c4d34d1f35d5e4cbd7d225f911379f958a4611b13e6dece85e49c6. 
Dec 13 01:54:47.096266 kubelet[3338]: E1213 01:54:47.095836 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.103789 kubelet[3338]: E1213 01:54:47.101700 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.103789 kubelet[3338]: W1213 01:54:47.101735 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.103789 kubelet[3338]: E1213 01:54:47.102204 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.105602 kubelet[3338]: E1213 01:54:47.105379 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.106107 kubelet[3338]: W1213 01:54:47.106069 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.106580 kubelet[3338]: E1213 01:54:47.106380 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.109095 kubelet[3338]: E1213 01:54:47.109035 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.109446 kubelet[3338]: W1213 01:54:47.109390 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.112976 kubelet[3338]: E1213 01:54:47.112107 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:54:47.114204 kubelet[3338]: E1213 01:54:47.113291 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.114204 kubelet[3338]: E1213 01:54:47.113366 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.114204 kubelet[3338]: W1213 01:54:47.113831 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.115285 kubelet[3338]: E1213 01:54:47.114312 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.116916 kubelet[3338]: E1213 01:54:47.115940 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.116916 kubelet[3338]: W1213 01:54:47.116084 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.116916 kubelet[3338]: E1213 01:54:47.116745 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.118670 kubelet[3338]: E1213 01:54:47.118398 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.118670 kubelet[3338]: W1213 01:54:47.118434 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.118670 kubelet[3338]: E1213 01:54:47.118511 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.120545 kubelet[3338]: E1213 01:54:47.119924 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.120545 kubelet[3338]: W1213 01:54:47.119975 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.120545 kubelet[3338]: E1213 01:54:47.120062 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.122234 kubelet[3338]: E1213 01:54:47.121635 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.122234 kubelet[3338]: W1213 01:54:47.121667 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.122234 kubelet[3338]: E1213 01:54:47.121731 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.123249 kubelet[3338]: E1213 01:54:47.123218 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.124595 kubelet[3338]: W1213 01:54:47.123360 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.124595 kubelet[3338]: E1213 01:54:47.123439 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.125435 kubelet[3338]: E1213 01:54:47.125252 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.125435 kubelet[3338]: W1213 01:54:47.125284 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.125621 kubelet[3338]: E1213 01:54:47.125454 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.125826 kubelet[3338]: E1213 01:54:47.125804 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.126003 kubelet[3338]: W1213 01:54:47.125905 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.126003 kubelet[3338]: E1213 01:54:47.125976 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.126795 kubelet[3338]: E1213 01:54:47.126624 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.126795 kubelet[3338]: W1213 01:54:47.126659 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.127600 kubelet[3338]: E1213 01:54:47.127340 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.128596 kubelet[3338]: E1213 01:54:47.127869 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.128596 kubelet[3338]: W1213 01:54:47.127904 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.129253 kubelet[3338]: E1213 01:54:47.128955 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.129614 kubelet[3338]: E1213 01:54:47.129582 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.130542 kubelet[3338]: W1213 01:54:47.130466 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.131388 kubelet[3338]: E1213 01:54:47.131319 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.132168 kubelet[3338]: E1213 01:54:47.131851 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.134559 kubelet[3338]: W1213 01:54:47.132593 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.135647 kubelet[3338]: E1213 01:54:47.135460 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.136122 kubelet[3338]: E1213 01:54:47.135902 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.136122 kubelet[3338]: W1213 01:54:47.135931 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.136639 kubelet[3338]: E1213 01:54:47.136603 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.136845 kubelet[3338]: W1213 01:54:47.136803 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.137580 kubelet[3338]: E1213 01:54:47.137207 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.137580 kubelet[3338]: E1213 01:54:47.137276 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.138543 kubelet[3338]: E1213 01:54:47.138270 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.138543 kubelet[3338]: W1213 01:54:47.138304 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.138543 kubelet[3338]: E1213 01:54:47.138376 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.140408 kubelet[3338]: E1213 01:54:47.140018 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.140408 kubelet[3338]: W1213 01:54:47.140067 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.140408 kubelet[3338]: E1213 01:54:47.140208 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.141707 kubelet[3338]: E1213 01:54:47.141661 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.142005 kubelet[3338]: W1213 01:54:47.141865 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.142005 kubelet[3338]: E1213 01:54:47.141962 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.143899 kubelet[3338]: E1213 01:54:47.143686 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.143899 kubelet[3338]: W1213 01:54:47.143725 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.143899 kubelet[3338]: E1213 01:54:47.143802 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.144813 kubelet[3338]: E1213 01:54:47.144586 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.144813 kubelet[3338]: W1213 01:54:47.144619 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.145465 kubelet[3338]: E1213 01:54:47.145276 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.145465 kubelet[3338]: W1213 01:54:47.145308 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.146580 kubelet[3338]: E1213 01:54:47.146050 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.146580 kubelet[3338]: E1213 01:54:47.146123 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.147956 kubelet[3338]: E1213 01:54:47.147884 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.147956 kubelet[3338]: W1213 01:54:47.147916 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.149602 kubelet[3338]: E1213 01:54:47.148583 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.152763 kubelet[3338]: E1213 01:54:47.152716 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.152964 kubelet[3338]: W1213 01:54:47.152934 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.154943 kubelet[3338]: E1213 01:54:47.154895 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.155444 kubelet[3338]: W1213 01:54:47.155165 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.157750 kubelet[3338]: E1213 01:54:47.157700 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.158238 kubelet[3338]: W1213 01:54:47.157951 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.160252 kubelet[3338]: E1213 01:54:47.160068 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.160252 kubelet[3338]: E1213 01:54:47.160146 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.160252 kubelet[3338]: E1213 01:54:47.160180 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.161833 kubelet[3338]: E1213 01:54:47.161772 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.162448 kubelet[3338]: W1213 01:54:47.162373 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.165818 kubelet[3338]: E1213 01:54:47.165556 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.166423 kubelet[3338]: E1213 01:54:47.166125 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.166423 kubelet[3338]: W1213 01:54:47.166165 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.166760 kubelet[3338]: E1213 01:54:47.166489 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.169938 kubelet[3338]: E1213 01:54:47.169892 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.170337 kubelet[3338]: W1213 01:54:47.170188 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.171301 kubelet[3338]: E1213 01:54:47.170359 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.171301 kubelet[3338]: E1213 01:54:47.170863 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.171301 kubelet[3338]: W1213 01:54:47.171075 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.171768 kubelet[3338]: E1213 01:54:47.171136 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.172914 kubelet[3338]: E1213 01:54:47.172674 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.172914 kubelet[3338]: W1213 01:54:47.172711 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.172914 kubelet[3338]: E1213 01:54:47.172745 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.173554 kubelet[3338]: E1213 01:54:47.173464 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.173911 kubelet[3338]: W1213 01:54:47.173672 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.173911 kubelet[3338]: E1213 01:54:47.173718 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.174411 kubelet[3338]: E1213 01:54:47.174377 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.174594 kubelet[3338]: W1213 01:54:47.174561 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.174948 kubelet[3338]: E1213 01:54:47.174708 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.175423 kubelet[3338]: E1213 01:54:47.175393 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.175640 kubelet[3338]: W1213 01:54:47.175611 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.175965 kubelet[3338]: E1213 01:54:47.175768 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.177702 kubelet[3338]: E1213 01:54:47.176510 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.178603 kubelet[3338]: W1213 01:54:47.177871 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.178603 kubelet[3338]: E1213 01:54:47.177930 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.179973 kubelet[3338]: E1213 01:54:47.179758 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.180641 kubelet[3338]: W1213 01:54:47.180509 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.181759 kubelet[3338]: E1213 01:54:47.181394 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.183802 kubelet[3338]: E1213 01:54:47.183654 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.185211 kubelet[3338]: W1213 01:54:47.184418 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.185211 kubelet[3338]: E1213 01:54:47.184488 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.190558 kubelet[3338]: E1213 01:54:47.189269 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.190558 kubelet[3338]: W1213 01:54:47.189314 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.190558 kubelet[3338]: E1213 01:54:47.189355 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.192195 kubelet[3338]: E1213 01:54:47.191473 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.192195 kubelet[3338]: W1213 01:54:47.191512 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.192195 kubelet[3338]: E1213 01:54:47.191613 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.197368 kubelet[3338]: E1213 01:54:47.195671 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.197368 kubelet[3338]: W1213 01:54:47.195723 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.197368 kubelet[3338]: E1213 01:54:47.196015 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.200249 kubelet[3338]: E1213 01:54:47.200008 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.200249 kubelet[3338]: W1213 01:54:47.200065 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.200249 kubelet[3338]: E1213 01:54:47.200123 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.201885 kubelet[3338]: E1213 01:54:47.201833 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.201885 kubelet[3338]: W1213 01:54:47.201873 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.202187 kubelet[3338]: E1213 01:54:47.201915 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.203439 kubelet[3338]: E1213 01:54:47.203388 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.203439 kubelet[3338]: W1213 01:54:47.203428 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.203682 kubelet[3338]: E1213 01:54:47.203461 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:47.205504 kubelet[3338]: E1213 01:54:47.205450 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.205504 kubelet[3338]: W1213 01:54:47.205490 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.205824 kubelet[3338]: E1213 01:54:47.205553 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.207068 kubelet[3338]: E1213 01:54:47.206947 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.207068 kubelet[3338]: W1213 01:54:47.207056 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.207311 kubelet[3338]: E1213 01:54:47.207095 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.208904 kubelet[3338]: E1213 01:54:47.208845 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.208904 kubelet[3338]: W1213 01:54:47.208887 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.209194 kubelet[3338]: E1213 01:54:47.208923 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.211063 kubelet[3338]: E1213 01:54:47.211002 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.211063 kubelet[3338]: W1213 01:54:47.211047 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.211271 kubelet[3338]: E1213 01:54:47.211085 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:47.213015 kubelet[3338]: E1213 01:54:47.212963 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:47.213015 kubelet[3338]: W1213 01:54:47.213002 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:47.213373 kubelet[3338]: E1213 01:54:47.213041 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
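The same three records recur every few milliseconds because kubelet re-probes its FlexVolume plugin directory and, on each probe, execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and tries to decode the driver's stdout as JSON. The binary is missing, so the captured output is empty, and decoding zero bytes is exactly what yields Go's "unexpected end of JSON input". A minimal Go sketch of the failure mode (illustrative only, not kubelet's actual code; driverStatus is an abbreviated, hypothetical stand-in for the status a FlexVolume driver prints):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Abbreviated stand-in for the JSON status a FlexVolume driver is
// expected to print on stdout (hypothetical subset, for illustration).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Exec the driver with "init", as in the failing probe above.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).CombinedOutput()
	if err != nil {
		// With the binary absent the exec fails and out stays empty,
		// which kubelet reports in the W1213 driver-call.go:149 lines.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal over zero bytes returns exactly
		// "unexpected end of JSON input" (the E1213 driver-call.go:262 lines).
		fmt.Printf("failed to unmarshal output: %v\n", err)
	}
}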
Dec 13 01:54:47.220291 kubelet[3338]: I1213 01:54:47.220009 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7qv\" (UniqueName: \"kubernetes.io/projected/44b75188-75ae-44a3-965d-98692905f7b3-kube-api-access-tv7qv\") pod \"csi-node-driver-j259d\" (UID: \"44b75188-75ae-44a3-965d-98692905f7b3\") " pod="calico-system/csi-node-driver-j259d"
Dec 13 01:54:47.224156 kubelet[3338]: I1213 01:54:47.222999 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/44b75188-75ae-44a3-965d-98692905f7b3-varrun\") pod \"csi-node-driver-j259d\" (UID: \"44b75188-75ae-44a3-965d-98692905f7b3\") " pod="calico-system/csi-node-driver-j259d"
Dec 13 01:54:47.226196 kubelet[3338]: I1213 01:54:47.225595 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44b75188-75ae-44a3-965d-98692905f7b3-registration-dir\") pod \"csi-node-driver-j259d\" (UID: \"44b75188-75ae-44a3-965d-98692905f7b3\") " pod="calico-system/csi-node-driver-j259d"
Dec 13 01:54:47.229470 kubelet[3338]: I1213 01:54:47.229428 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44b75188-75ae-44a3-965d-98692905f7b3-socket-dir\") pod \"csi-node-driver-j259d\" (UID: \"44b75188-75ae-44a3-965d-98692905f7b3\") " pod="calico-system/csi-node-driver-j259d"
Dec 13 01:54:47.235931 kubelet[3338]: I1213 01:54:47.235763 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b75188-75ae-44a3-965d-98692905f7b3-kubelet-dir\") pod \"csi-node-driver-j259d\" (UID: \"44b75188-75ae-44a3-965d-98692905f7b3\") " pod="calico-system/csi-node-driver-j259d"
Dec 13 01:54:47.251966 containerd[2026]: time="2024-12-13T01:54:47.251878809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdkfb,Uid:4bbe889b-dfef-446f-975a-a8889f4343d6,Namespace:calico-system,Attempt:0,}"
Dec 13 01:54:47.336943 containerd[2026]: time="2024-12-13T01:54:47.336392025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:47.336943 containerd[2026]: time="2024-12-13T01:54:47.336502989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:47.337472 containerd[2026]: time="2024-12-13T01:54:47.336898269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:47.339061 containerd[2026]: time="2024-12-13T01:54:47.338742345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:47.398930 systemd[1]: Started cri-containerd-31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1.scope - libcontainer container 31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1.
Dec 13 01:54:47.551640 containerd[2026]: time="2024-12-13T01:54:47.550428407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f8696967d-c84zf,Uid:71577281-4dfe-42f6-87b4-82e229c6bff3,Namespace:calico-system,Attempt:0,} returns sandbox id \"59d107f956c4d34d1f35d5e4cbd7d225f911379f958a4611b13e6dece85e49c6\""
Dec 13 01:54:47.560478 containerd[2026]: time="2024-12-13T01:54:47.560384039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:54:47.585600 containerd[2026]: time="2024-12-13T01:54:47.585476363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tdkfb,Uid:4bbe889b-dfef-446f-975a-a8889f4343d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\""
Dec 13 01:54:48.992961 kubelet[3338]: E1213 01:54:48.992496 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3"
Dec 13 01:54:49.019960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989238885.mount: Deactivated successfully.
Dec 13 01:54:50.537874 containerd[2026]: time="2024-12-13T01:54:50.537818821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:50.540617 containerd[2026]: time="2024-12-13T01:54:50.540547861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Dec 13 01:54:50.541957 containerd[2026]: time="2024-12-13T01:54:50.541876657Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:50.545741 containerd[2026]: time="2024-12-13T01:54:50.545673301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:50.547700 containerd[2026]: time="2024-12-13T01:54:50.547412713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.986960742s"
Dec 13 01:54:50.547700 containerd[2026]: time="2024-12-13T01:54:50.547468489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Dec 13 01:54:50.550445 containerd[2026]: time="2024-12-13T01:54:50.549927541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:54:50.590455 containerd[2026]: time="2024-12-13T01:54:50.590370146Z" level=info msg="CreateContainer within sandbox \"59d107f956c4d34d1f35d5e4cbd7d225f911379f958a4611b13e6dece85e49c6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:54:50.615839 containerd[2026]: time="2024-12-13T01:54:50.615664874Z" level=info msg="CreateContainer within sandbox \"59d107f956c4d34d1f35d5e4cbd7d225f911379f958a4611b13e6dece85e49c6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"30d35f12393412da89fbe62c941b92c7abbd0bbedf2ef7788f13178aa931cf28\""
Dec 13 01:54:50.617265 containerd[2026]: time="2024-12-13T01:54:50.616860278Z" level=info msg="StartContainer for \"30d35f12393412da89fbe62c941b92c7abbd0bbedf2ef7788f13178aa931cf28\""
Dec 13 01:54:50.684895 systemd[1]: Started cri-containerd-30d35f12393412da89fbe62c941b92c7abbd0bbedf2ef7788f13178aa931cf28.scope - libcontainer container 30d35f12393412da89fbe62c941b92c7abbd0bbedf2ef7788f13178aa931cf28.
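As a back-of-envelope check on the pull above, the "Pulled image" record carries both the repo-digest size (29231162 bytes) and the wall time (2.986960742s), which works out to roughly 9.8 MB/s. The snippet below simply reproduces that arithmetic with the logged values; nothing here is measured independently:

package main

import "fmt"

func main() {
	// Figures copied verbatim from the "Pulled image" record above.
	const bytesPulled = 29231162.0 // size of calico/typha:v3.29.1 by repo digest
	const seconds = 2.986960742    // reported pull duration

	rate := bytesPulled / seconds
	fmt.Printf("~%.1f MB/s (~%.1f MiB/s)\n", rate/1e6, rate/(1024*1024))
	// Output: ~9.8 MB/s (~9.3 MiB/s)
}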
Dec 13 01:54:50.756665 containerd[2026]: time="2024-12-13T01:54:50.756566162Z" level=info msg="StartContainer for \"30d35f12393412da89fbe62c941b92c7abbd0bbedf2ef7788f13178aa931cf28\" returns successfully"
Dec 13 01:54:50.995895 kubelet[3338]: E1213 01:54:50.995804 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3"
Dec 13 01:54:51.249846 kubelet[3338]: E1213 01:54:51.249390 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:54:51.249846 kubelet[3338]: W1213 01:54:51.249468 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:54:51.249846 kubelet[3338]: E1213 01:54:51.249727 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 01:54:51.265641 kubelet[3338]: I1213 01:54:51.265487 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f8696967d-c84zf" podStartSLOduration=2.275792039 podStartE2EDuration="5.265462825s" podCreationTimestamp="2024-12-13 01:54:46 +0000 UTC" firstStartedPulling="2024-12-13 01:54:47.559794791 +0000 UTC m=+16.832378437" lastFinishedPulling="2024-12-13 01:54:50.549465577 +0000 UTC m=+19.822049223" observedRunningTime="2024-12-13 01:54:51.265191469 +0000 UTC m=+20.537775127" watchObservedRunningTime="2024-12-13 01:54:51.265462825 +0000 UTC m=+20.538046471"
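The startup-latency record above is self-consistent: the image-pull window (lastFinishedPulling minus firstStartedPulling) spans 2.989670786s, and subtracting it from the 5.265462825s end-to-end duration gives the reported podStartSLOduration of 2.275792039, consistent with the SLO figure excluding image-pull time. A quick check using the monotonic (m=+...) offsets copied from the record:

package main

import "fmt"

func main() {
	// Monotonic offsets and durations copied from the
	// pod_startup_latency_tracker record above.
	const firstStartedPulling = 16.832378437 // m=+ offset, seconds
	const lastFinishedPulling = 19.822049223 // m=+ offset, seconds
	const e2e = 5.265462825                  // podStartE2EDuration, seconds

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window: %.9fs\n", pullWindow)     // ~2.989670786s
	fmt.Printf("e2e - pull:  %.9fs\n", e2e-pullWindow) // ~2.275792039s (podStartSLOduration)
}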
Dec 13 01:54:51.314159 kubelet[3338]: E1213 01:54:51.314075 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 01:54:51.315428 kubelet[3338]: E1213 01:54:51.315260 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.315428 kubelet[3338]: W1213 01:54:51.315297 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.316169 kubelet[3338]: E1213 01:54:51.315673 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.317701 kubelet[3338]: E1213 01:54:51.316927 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.317701 kubelet[3338]: W1213 01:54:51.316963 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.317701 kubelet[3338]: E1213 01:54:51.316998 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.318875 kubelet[3338]: E1213 01:54:51.318832 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.319235 kubelet[3338]: W1213 01:54:51.319200 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.319565 kubelet[3338]: E1213 01:54:51.319506 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.321295 kubelet[3338]: E1213 01:54:51.320751 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.321295 kubelet[3338]: W1213 01:54:51.320788 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.321295 kubelet[3338]: E1213 01:54:51.320824 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.322692 kubelet[3338]: E1213 01:54:51.321556 3338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.322692 kubelet[3338]: W1213 01:54:51.321579 3338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.322692 kubelet[3338]: E1213 01:54:51.321609 3338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.002600 containerd[2026]: time="2024-12-13T01:54:52.001931617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.004380 containerd[2026]: time="2024-12-13T01:54:52.004293913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:54:52.006793 containerd[2026]: time="2024-12-13T01:54:52.006716425Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.014605 containerd[2026]: time="2024-12-13T01:54:52.014186713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.016246 containerd[2026]: time="2024-12-13T01:54:52.016094917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.466108912s" Dec 13 01:54:52.016472 containerd[2026]: time="2024-12-13T01:54:52.016437505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:54:52.022030 containerd[2026]: time="2024-12-13T01:54:52.021941425Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:54:52.053961 containerd[2026]: time="2024-12-13T01:54:52.053834185Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7\"" Dec 13 01:54:52.055020 containerd[2026]: time="2024-12-13T01:54:52.054955501Z" level=info msg="StartContainer for \"3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7\"" Dec 13 01:54:52.115959 systemd[1]: run-containerd-runc-k8s.io-3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7-runc.qnhhjI.mount: Deactivated successfully. Dec 13 01:54:52.128862 systemd[1]: Started cri-containerd-3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7.scope - libcontainer container 3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7. Dec 13 01:54:52.186711 containerd[2026]: time="2024-12-13T01:54:52.186611186Z" level=info msg="StartContainer for \"3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7\" returns successfully" Dec 13 01:54:52.218421 systemd[1]: cri-containerd-3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7.scope: Deactivated successfully. 
Dec 13 01:54:52.252952 kubelet[3338]: I1213 01:54:52.252107 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:54:52.294258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7-rootfs.mount: Deactivated successfully. Dec 13 01:54:52.533138 containerd[2026]: time="2024-12-13T01:54:52.531993759Z" level=info msg="shim disconnected" id=3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7 namespace=k8s.io Dec 13 01:54:52.533138 containerd[2026]: time="2024-12-13T01:54:52.532073163Z" level=warning msg="cleaning up after shim disconnected" id=3a506efaf7c9c90152a7536ae14599de87bd8fe3f3852178678524e8601ba8d7 namespace=k8s.io Dec 13 01:54:52.533138 containerd[2026]: time="2024-12-13T01:54:52.532096179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:54:52.992781 kubelet[3338]: E1213 01:54:52.992710 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:54:53.262483 containerd[2026]: time="2024-12-13T01:54:53.261915207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:54:54.994245 kubelet[3338]: E1213 01:54:54.993271 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:54:56.993664 kubelet[3338]: E1213 01:54:56.993061 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:54:57.952280 containerd[2026]: time="2024-12-13T01:54:57.952156042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:57.954379 containerd[2026]: time="2024-12-13T01:54:57.954160990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:54:57.954379 containerd[2026]: time="2024-12-13T01:54:57.954298642Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:57.958971 containerd[2026]: time="2024-12-13T01:54:57.958837522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:57.961951 containerd[2026]: time="2024-12-13T01:54:57.961681006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 
4.699703759s" Dec 13 01:54:57.961951 containerd[2026]: time="2024-12-13T01:54:57.961747942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:54:57.967624 containerd[2026]: time="2024-12-13T01:54:57.967555306Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:54:57.992730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165641383.mount: Deactivated successfully. Dec 13 01:54:57.995016 containerd[2026]: time="2024-12-13T01:54:57.993897346Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3\"" Dec 13 01:54:57.996842 containerd[2026]: time="2024-12-13T01:54:57.995700118Z" level=info msg="StartContainer for \"ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3\"" Dec 13 01:54:58.066161 systemd[1]: Started cri-containerd-ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3.scope - libcontainer container ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3. Dec 13 01:54:58.119351 containerd[2026]: time="2024-12-13T01:54:58.119239087Z" level=info msg="StartContainer for \"ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3\" returns successfully" Dec 13 01:54:58.996623 kubelet[3338]: E1213 01:54:58.995975 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:54:59.220357 containerd[2026]: time="2024-12-13T01:54:59.220271372Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:54:59.224507 systemd[1]: cri-containerd-ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3.scope: Deactivated successfully. Dec 13 01:54:59.266862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3-rootfs.mount: Deactivated successfully. Dec 13 01:54:59.319744 kubelet[3338]: I1213 01:54:59.319361 3338 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:54:59.403189 systemd[1]: Created slice kubepods-burstable-poda10cc81f_5a1f_43e0_b9b3_7b3335bef263.slice - libcontainer container kubepods-burstable-poda10cc81f_5a1f_43e0_b9b3_7b3335bef263.slice. Dec 13 01:54:59.443658 systemd[1]: Created slice kubepods-burstable-pod17858bd8_f036_4bd4_834e_afda00a53d7c.slice - libcontainer container kubepods-burstable-pod17858bd8_f036_4bd4_834e_afda00a53d7c.slice. Dec 13 01:54:59.462699 systemd[1]: Created slice kubepods-besteffort-pod98155941_7b7d_48f1_80f8_b0abe7a2cd77.slice - libcontainer container kubepods-besteffort-pod98155941_7b7d_48f1_80f8_b0abe7a2cd77.slice. 
Dec 13 01:54:59.466561 kubelet[3338]: W1213 01:54:59.466441 3338 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-221" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-221' and this object Dec 13 01:54:59.467556 kubelet[3338]: E1213 01:54:59.466906 3338 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-19-221\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-19-221' and this object" logger="UnhandledError" Dec 13 01:54:59.467556 kubelet[3338]: W1213 01:54:59.467083 3338 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-19-221" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-19-221' and this object Dec 13 01:54:59.467556 kubelet[3338]: E1213 01:54:59.467115 3338 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-19-221\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-19-221' and this object" logger="UnhandledError" Dec 13 01:54:59.488136 systemd[1]: Created slice kubepods-besteffort-pod40ef9bd9_93b0_4b2f_96bf_8dd3e39ec969.slice - libcontainer container kubepods-besteffort-pod40ef9bd9_93b0_4b2f_96bf_8dd3e39ec969.slice. Dec 13 01:54:59.507693 systemd[1]: Created slice kubepods-besteffort-pod46e827e0_09e5_4459_853e_fe2f3072d9fa.slice - libcontainer container kubepods-besteffort-pod46e827e0_09e5_4459_853e_fe2f3072d9fa.slice. 
Dec 13 01:54:59.566623 kubelet[3338]: I1213 01:54:59.566361 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgvdb\" (UniqueName: \"kubernetes.io/projected/46e827e0-09e5-4459-853e-fe2f3072d9fa-kube-api-access-tgvdb\") pod \"calico-apiserver-59b7bf6bf4-wfgbb\" (UID: \"46e827e0-09e5-4459-853e-fe2f3072d9fa\") " pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" Dec 13 01:54:59.566623 kubelet[3338]: I1213 01:54:59.566452 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vtt\" (UniqueName: \"kubernetes.io/projected/a10cc81f-5a1f-43e0-b9b3-7b3335bef263-kube-api-access-77vtt\") pod \"coredns-6f6b679f8f-t284s\" (UID: \"a10cc81f-5a1f-43e0-b9b3-7b3335bef263\") " pod="kube-system/coredns-6f6b679f8f-t284s" Dec 13 01:54:59.566623 kubelet[3338]: I1213 01:54:59.566494 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17858bd8-f036-4bd4-834e-afda00a53d7c-config-volume\") pod \"coredns-6f6b679f8f-fdk95\" (UID: \"17858bd8-f036-4bd4-834e-afda00a53d7c\") " pod="kube-system/coredns-6f6b679f8f-fdk95" Dec 13 01:54:59.569239 kubelet[3338]: I1213 01:54:59.569163 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969-calico-apiserver-certs\") pod \"calico-apiserver-59b7bf6bf4-l7swh\" (UID: \"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969\") " pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" Dec 13 01:54:59.569558 kubelet[3338]: I1213 01:54:59.569260 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98155941-7b7d-48f1-80f8-b0abe7a2cd77-tigera-ca-bundle\") pod \"calico-kube-controllers-9f6ff568d-sz27l\" (UID: \"98155941-7b7d-48f1-80f8-b0abe7a2cd77\") " pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" Dec 13 01:54:59.569558 kubelet[3338]: I1213 01:54:59.569302 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46e827e0-09e5-4459-853e-fe2f3072d9fa-calico-apiserver-certs\") pod \"calico-apiserver-59b7bf6bf4-wfgbb\" (UID: \"46e827e0-09e5-4459-853e-fe2f3072d9fa\") " pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" Dec 13 01:54:59.569558 kubelet[3338]: I1213 01:54:59.569358 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmccb\" (UniqueName: \"kubernetes.io/projected/17858bd8-f036-4bd4-834e-afda00a53d7c-kube-api-access-zmccb\") pod \"coredns-6f6b679f8f-fdk95\" (UID: \"17858bd8-f036-4bd4-834e-afda00a53d7c\") " pod="kube-system/coredns-6f6b679f8f-fdk95" Dec 13 01:54:59.569558 kubelet[3338]: I1213 01:54:59.569403 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44tb\" (UniqueName: \"kubernetes.io/projected/40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969-kube-api-access-x44tb\") pod \"calico-apiserver-59b7bf6bf4-l7swh\" (UID: \"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969\") " pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" Dec 13 01:54:59.569558 kubelet[3338]: I1213 01:54:59.569450 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10cc81f-5a1f-43e0-b9b3-7b3335bef263-config-volume\") pod \"coredns-6f6b679f8f-t284s\" (UID: \"a10cc81f-5a1f-43e0-b9b3-7b3335bef263\") " pod="kube-system/coredns-6f6b679f8f-t284s" Dec 13 01:54:59.569866 kubelet[3338]: I1213 01:54:59.569489 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljwq\" (UniqueName: \"kubernetes.io/projected/98155941-7b7d-48f1-80f8-b0abe7a2cd77-kube-api-access-vljwq\") pod \"calico-kube-controllers-9f6ff568d-sz27l\" (UID: \"98155941-7b7d-48f1-80f8-b0abe7a2cd77\") " pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" Dec 13 01:54:59.732206 containerd[2026]: time="2024-12-13T01:54:59.730063547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t284s,Uid:a10cc81f-5a1f-43e0-b9b3-7b3335bef263,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:59.753362 containerd[2026]: time="2024-12-13T01:54:59.753285515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fdk95,Uid:17858bd8-f036-4bd4-834e-afda00a53d7c,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:59.777824 containerd[2026]: time="2024-12-13T01:54:59.777484511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f6ff568d-sz27l,Uid:98155941-7b7d-48f1-80f8-b0abe7a2cd77,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:59.962738 containerd[2026]: time="2024-12-13T01:54:59.962443728Z" level=info msg="shim disconnected" id=ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3 namespace=k8s.io Dec 13 01:54:59.963132 containerd[2026]: time="2024-12-13T01:54:59.962711388Z" level=warning msg="cleaning up after shim disconnected" id=ee8b87773b61a8c1efef2a6e2242905d6067e9c81ec94f0fe8f31c38f11f6af3 namespace=k8s.io Dec 13 01:54:59.963132 containerd[2026]: time="2024-12-13T01:54:59.962774376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:00.195468 containerd[2026]: time="2024-12-13T01:55:00.195373101Z" level=error msg="Failed to destroy network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.197368 containerd[2026]: time="2024-12-13T01:55:00.197140821Z" level=error msg="encountered an error cleaning up failed sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.198667 containerd[2026]: time="2024-12-13T01:55:00.197695077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fdk95,Uid:17858bd8-f036-4bd4-834e-afda00a53d7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.199191 kubelet[3338]: E1213 01:55:00.198831 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.199191 kubelet[3338]: E1213 01:55:00.198961 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fdk95" Dec 13 01:55:00.199191 kubelet[3338]: E1213 01:55:00.198999 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fdk95" Dec 13 01:55:00.202174 kubelet[3338]: E1213 01:55:00.199084 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fdk95_kube-system(17858bd8-f036-4bd4-834e-afda00a53d7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fdk95_kube-system(17858bd8-f036-4bd4-834e-afda00a53d7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fdk95" podUID="17858bd8-f036-4bd4-834e-afda00a53d7c" Dec 13 01:55:00.204407 containerd[2026]: time="2024-12-13T01:55:00.203829465Z" level=error msg="Failed to destroy network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.205604 containerd[2026]: time="2024-12-13T01:55:00.205219941Z" level=error msg="encountered an error cleaning up failed sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.205604 containerd[2026]: time="2024-12-13T01:55:00.205316661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f6ff568d-sz27l,Uid:98155941-7b7d-48f1-80f8-b0abe7a2cd77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.205813 kubelet[3338]: E1213 01:55:00.205628 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.205813 kubelet[3338]: E1213 01:55:00.205705 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" Dec 13 01:55:00.205813 kubelet[3338]: E1213 01:55:00.205737 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" Dec 13 01:55:00.206011 kubelet[3338]: E1213 01:55:00.205806 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9f6ff568d-sz27l_calico-system(98155941-7b7d-48f1-80f8-b0abe7a2cd77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9f6ff568d-sz27l_calico-system(98155941-7b7d-48f1-80f8-b0abe7a2cd77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" podUID="98155941-7b7d-48f1-80f8-b0abe7a2cd77" Dec 13 01:55:00.211705 containerd[2026]: time="2024-12-13T01:55:00.211589565Z" level=error msg="Failed to destroy network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.214597 containerd[2026]: time="2024-12-13T01:55:00.212931297Z" level=error msg="encountered an error cleaning up failed sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.214597 containerd[2026]: time="2024-12-13T01:55:00.213123717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t284s,Uid:a10cc81f-5a1f-43e0-b9b3-7b3335bef263,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 13 01:55:00.214912 kubelet[3338]: E1213 01:55:00.214294 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.214912 kubelet[3338]: E1213 01:55:00.214373 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t284s" Dec 13 01:55:00.214912 kubelet[3338]: E1213 01:55:00.214407 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t284s" Dec 13 01:55:00.215098 kubelet[3338]: E1213 01:55:00.214474 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t284s_kube-system(a10cc81f-5a1f-43e0-b9b3-7b3335bef263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t284s_kube-system(a10cc81f-5a1f-43e0-b9b3-7b3335bef263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t284s" podUID="a10cc81f-5a1f-43e0-b9b3-7b3335bef263" Dec 13 01:55:00.269744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062-shm.mount: Deactivated successfully. 
Dec 13 01:55:00.323114 kubelet[3338]: I1213 01:55:00.319823 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:00.323295 containerd[2026]: time="2024-12-13T01:55:00.322350790Z" level=info msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" Dec 13 01:55:00.323295 containerd[2026]: time="2024-12-13T01:55:00.322488538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:55:00.327758 containerd[2026]: time="2024-12-13T01:55:00.324809038Z" level=info msg="Ensure that sandbox cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062 in task-service has been cleanup successfully" Dec 13 01:55:00.336493 kubelet[3338]: I1213 01:55:00.336296 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:00.341566 containerd[2026]: time="2024-12-13T01:55:00.338938234Z" level=info msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" Dec 13 01:55:00.341566 containerd[2026]: time="2024-12-13T01:55:00.340417654Z" level=info msg="Ensure that sandbox d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43 in task-service has been cleanup successfully" Dec 13 01:55:00.351646 kubelet[3338]: I1213 01:55:00.351598 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:00.355883 containerd[2026]: time="2024-12-13T01:55:00.355817302Z" level=info msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" Dec 13 01:55:00.357613 containerd[2026]: time="2024-12-13T01:55:00.357330370Z" level=info msg="Ensure that sandbox 6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0 in task-service has been cleanup successfully" Dec 13 01:55:00.427022 containerd[2026]: time="2024-12-13T01:55:00.426860422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-wfgbb,Uid:46e827e0-09e5-4459-853e-fe2f3072d9fa,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:55:00.547584 containerd[2026]: time="2024-12-13T01:55:00.544775387Z" level=error msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" failed" error="failed to destroy network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.547787 kubelet[3338]: E1213 01:55:00.546582 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:00.547787 kubelet[3338]: E1213 01:55:00.546694 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43"} Dec 13 01:55:00.547787 kubelet[3338]: 
E1213 01:55:00.546797 3338 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98155941-7b7d-48f1-80f8-b0abe7a2cd77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:00.548261 kubelet[3338]: E1213 01:55:00.546837 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98155941-7b7d-48f1-80f8-b0abe7a2cd77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" podUID="98155941-7b7d-48f1-80f8-b0abe7a2cd77" Dec 13 01:55:00.558661 containerd[2026]: time="2024-12-13T01:55:00.558496439Z" level=error msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" failed" error="failed to destroy network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.559987 kubelet[3338]: E1213 01:55:00.559087 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:00.559987 kubelet[3338]: E1213 01:55:00.559233 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0"} Dec 13 01:55:00.559987 kubelet[3338]: E1213 01:55:00.559370 3338 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17858bd8-f036-4bd4-834e-afda00a53d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:00.559987 kubelet[3338]: E1213 01:55:00.559444 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17858bd8-f036-4bd4-834e-afda00a53d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-6f6b679f8f-fdk95" podUID="17858bd8-f036-4bd4-834e-afda00a53d7c" Dec 13 01:55:00.570165 containerd[2026]: time="2024-12-13T01:55:00.567992279Z" level=error msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" failed" error="failed to destroy network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.570481 kubelet[3338]: E1213 01:55:00.568579 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:00.570481 kubelet[3338]: E1213 01:55:00.568648 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062"} Dec 13 01:55:00.570481 kubelet[3338]: E1213 01:55:00.568707 3338 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a10cc81f-5a1f-43e0-b9b3-7b3335bef263\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:00.570481 kubelet[3338]: E1213 01:55:00.568752 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a10cc81f-5a1f-43e0-b9b3-7b3335bef263\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t284s" podUID="a10cc81f-5a1f-43e0-b9b3-7b3335bef263" Dec 13 01:55:00.632777 containerd[2026]: time="2024-12-13T01:55:00.632619864Z" level=error msg="Failed to destroy network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.633238 containerd[2026]: time="2024-12-13T01:55:00.633180924Z" level=error msg="encountered an error cleaning up failed sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.633308 containerd[2026]: time="2024-12-13T01:55:00.633266676Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-wfgbb,Uid:46e827e0-09e5-4459-853e-fe2f3072d9fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.633669 kubelet[3338]: E1213 01:55:00.633595 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.633821 kubelet[3338]: E1213 01:55:00.633695 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" Dec 13 01:55:00.633821 kubelet[3338]: E1213 01:55:00.633732 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" Dec 13 01:55:00.634080 kubelet[3338]: E1213 01:55:00.633820 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59b7bf6bf4-wfgbb_calico-apiserver(46e827e0-09e5-4459-853e-fe2f3072d9fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b7bf6bf4-wfgbb_calico-apiserver(46e827e0-09e5-4459-853e-fe2f3072d9fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" podUID="46e827e0-09e5-4459-853e-fe2f3072d9fa" Dec 13 01:55:00.709915 containerd[2026]: time="2024-12-13T01:55:00.709797276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-l7swh,Uid:40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:55:00.818054 containerd[2026]: time="2024-12-13T01:55:00.817859616Z" level=error msg="Failed to destroy network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.818564 containerd[2026]: time="2024-12-13T01:55:00.818407200Z" level=error msg="encountered an error cleaning up failed sandbox 
\"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.818668 containerd[2026]: time="2024-12-13T01:55:00.818551596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-l7swh,Uid:40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.819598 kubelet[3338]: E1213 01:55:00.818943 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:00.819598 kubelet[3338]: E1213 01:55:00.819052 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" Dec 13 01:55:00.819598 kubelet[3338]: E1213 01:55:00.819087 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" Dec 13 01:55:00.819866 kubelet[3338]: E1213 01:55:00.819153 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59b7bf6bf4-l7swh_calico-apiserver(40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b7bf6bf4-l7swh_calico-apiserver(40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" podUID="40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969" Dec 13 01:55:01.007353 systemd[1]: Created slice kubepods-besteffort-pod44b75188_75ae_44a3_965d_98692905f7b3.slice - libcontainer container kubepods-besteffort-pod44b75188_75ae_44a3_965d_98692905f7b3.slice. 
Dec 13 01:55:01.012399 containerd[2026]: time="2024-12-13T01:55:01.012343773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j259d,Uid:44b75188-75ae-44a3-965d-98692905f7b3,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:01.118615 containerd[2026]: time="2024-12-13T01:55:01.118323646Z" level=error msg="Failed to destroy network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.119150 containerd[2026]: time="2024-12-13T01:55:01.119070598Z" level=error msg="encountered an error cleaning up failed sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.119381 containerd[2026]: time="2024-12-13T01:55:01.119317246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j259d,Uid:44b75188-75ae-44a3-965d-98692905f7b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.120219 kubelet[3338]: E1213 01:55:01.119800 3338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.120219 kubelet[3338]: E1213 01:55:01.119881 3338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j259d" Dec 13 01:55:01.120219 kubelet[3338]: E1213 01:55:01.119919 3338 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j259d" Dec 13 01:55:01.120872 kubelet[3338]: E1213 01:55:01.119991 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j259d_calico-system(44b75188-75ae-44a3-965d-98692905f7b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j259d_calico-system(44b75188-75ae-44a3-965d-98692905f7b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:55:01.360546 kubelet[3338]: I1213 01:55:01.358368 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:01.363952 containerd[2026]: time="2024-12-13T01:55:01.362854751Z" level=info msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" Dec 13 01:55:01.363952 containerd[2026]: time="2024-12-13T01:55:01.363204983Z" level=info msg="Ensure that sandbox 9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64 in task-service has been cleanup successfully" Dec 13 01:55:01.374042 kubelet[3338]: I1213 01:55:01.372054 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:01.380712 containerd[2026]: time="2024-12-13T01:55:01.377392043Z" level=info msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" Dec 13 01:55:01.385501 containerd[2026]: time="2024-12-13T01:55:01.384889391Z" level=info msg="Ensure that sandbox 25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99 in task-service has been cleanup successfully" Dec 13 01:55:01.393463 kubelet[3338]: I1213 01:55:01.393395 3338 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:01.399851 containerd[2026]: time="2024-12-13T01:55:01.399490103Z" level=info msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" Dec 13 01:55:01.400952 containerd[2026]: time="2024-12-13T01:55:01.400862303Z" level=info msg="Ensure that sandbox 7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660 in task-service has been cleanup successfully" Dec 13 01:55:01.474466 containerd[2026]: time="2024-12-13T01:55:01.474398280Z" level=error msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" failed" error="failed to destroy network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.475032 kubelet[3338]: E1213 01:55:01.474977 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:01.475311 kubelet[3338]: E1213 01:55:01.475271 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64"} Dec 13 01:55:01.475573 kubelet[3338]: E1213 01:55:01.475429 3338 
kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44b75188-75ae-44a3-965d-98692905f7b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.475573 kubelet[3338]: E1213 01:55:01.475478 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44b75188-75ae-44a3-965d-98692905f7b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j259d" podUID="44b75188-75ae-44a3-965d-98692905f7b3" Dec 13 01:55:01.481445 containerd[2026]: time="2024-12-13T01:55:01.481345392Z" level=error msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" failed" error="failed to destroy network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.482486 kubelet[3338]: E1213 01:55:01.482433 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:01.482745 kubelet[3338]: E1213 01:55:01.482710 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660"} Dec 13 01:55:01.482897 kubelet[3338]: E1213 01:55:01.482869 3338 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46e827e0-09e5-4459-853e-fe2f3072d9fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.483238 kubelet[3338]: E1213 01:55:01.483071 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46e827e0-09e5-4459-853e-fe2f3072d9fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" 
podUID="46e827e0-09e5-4459-853e-fe2f3072d9fa" Dec 13 01:55:01.496667 containerd[2026]: time="2024-12-13T01:55:01.496583268Z" level=error msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" failed" error="failed to destroy network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.497000 kubelet[3338]: E1213 01:55:01.496936 3338 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:01.497099 kubelet[3338]: E1213 01:55:01.497019 3338 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99"} Dec 13 01:55:01.497099 kubelet[3338]: E1213 01:55:01.497074 3338 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:01.497390 kubelet[3338]: E1213 01:55:01.497114 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" podUID="40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969" Dec 13 01:55:08.737438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327060215.mount: Deactivated successfully. 
Dec 13 01:55:08.811340 containerd[2026]: time="2024-12-13T01:55:08.811248020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:08.813143 containerd[2026]: time="2024-12-13T01:55:08.813049460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:55:08.815651 containerd[2026]: time="2024-12-13T01:55:08.815550344Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:08.820615 containerd[2026]: time="2024-12-13T01:55:08.820451804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:08.822117 containerd[2026]: time="2024-12-13T01:55:08.821831288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.499149322s" Dec 13 01:55:08.822117 containerd[2026]: time="2024-12-13T01:55:08.821896052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:55:08.859747 containerd[2026]: time="2024-12-13T01:55:08.859408592Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:55:08.907967 containerd[2026]: time="2024-12-13T01:55:08.907892373Z" level=info msg="CreateContainer within sandbox \"31fc935e20adbb06d6694ad9ad1044d9a7d51959c0fc34dc48e875f755f517e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0aa350ce0d54d60837e2e1ea5ab4cc052ec9d5199f67093c9e012f8f07af4072\"" Dec 13 01:55:08.911383 containerd[2026]: time="2024-12-13T01:55:08.910510749Z" level=info msg="StartContainer for \"0aa350ce0d54d60837e2e1ea5ab4cc052ec9d5199f67093c9e012f8f07af4072\"" Dec 13 01:55:08.959417 systemd[1]: Started cri-containerd-0aa350ce0d54d60837e2e1ea5ab4cc052ec9d5199f67093c9e012f8f07af4072.scope - libcontainer container 0aa350ce0d54d60837e2e1ea5ab4cc052ec9d5199f67093c9e012f8f07af4072. Dec 13 01:55:09.031018 containerd[2026]: time="2024-12-13T01:55:09.030724805Z" level=info msg="StartContainer for \"0aa350ce0d54d60837e2e1ea5ab4cc052ec9d5199f67093c9e012f8f07af4072\" returns successfully" Dec 13 01:55:09.155605 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:55:09.156564 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Dec 13 01:55:09.485579 kubelet[3338]: I1213 01:55:09.485350 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tdkfb" podStartSLOduration=2.254136346 podStartE2EDuration="23.485320987s" podCreationTimestamp="2024-12-13 01:54:46 +0000 UTC" firstStartedPulling="2024-12-13 01:54:47.592832423 +0000 UTC m=+16.865416057" lastFinishedPulling="2024-12-13 01:55:08.824017052 +0000 UTC m=+38.096600698" observedRunningTime="2024-12-13 01:55:09.482821819 +0000 UTC m=+38.755405453" watchObservedRunningTime="2024-12-13 01:55:09.485320987 +0000 UTC m=+38.757904645" Dec 13 01:55:09.658746 kubelet[3338]: I1213 01:55:09.658029 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:11.521608 kernel: bpftool[4689]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:55:11.835036 (udev-worker)[4495]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:11.846296 systemd-networkd[1947]: vxlan.calico: Link UP Dec 13 01:55:11.846319 systemd-networkd[1947]: vxlan.calico: Gained carrier Dec 13 01:55:11.883423 (udev-worker)[4492]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:11.995045 containerd[2026]: time="2024-12-13T01:55:11.994141788Z" level=info msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" Dec 13 01:55:11.996786 containerd[2026]: time="2024-12-13T01:55:11.995696304Z" level=info msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.211 [INFO][4752] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.212 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" iface="eth0" netns="/var/run/netns/cni-05815253-509d-4c72-16e5-088817f680f0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.218 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" iface="eth0" netns="/var/run/netns/cni-05815253-509d-4c72-16e5-088817f680f0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.223 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" iface="eth0" netns="/var/run/netns/cni-05815253-509d-4c72-16e5-088817f680f0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.223 [INFO][4752] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.224 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.292 [INFO][4767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.292 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.293 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.323 [WARNING][4767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.323 [INFO][4767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.327 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:12.338153 containerd[2026]: 2024-12-13 01:55:12.332 [INFO][4752] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:12.344582 containerd[2026]: time="2024-12-13T01:55:12.342391294Z" level=info msg="TearDown network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" successfully" Dec 13 01:55:12.344582 containerd[2026]: time="2024-12-13T01:55:12.342448474Z" level=info msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" returns successfully" Dec 13 01:55:12.350348 containerd[2026]: time="2024-12-13T01:55:12.345849610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-wfgbb,Uid:46e827e0-09e5-4459-853e-fe2f3072d9fa,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:12.348612 systemd[1]: run-netns-cni\x2d05815253\x2d509d\x2d4c72\x2d16e5\x2d088817f680f0.mount: Deactivated successfully. Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.219 [INFO][4753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.219 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" iface="eth0" netns="/var/run/netns/cni-48604f60-41da-c16c-de2b-59d30cabc333" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.221 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" iface="eth0" netns="/var/run/netns/cni-48604f60-41da-c16c-de2b-59d30cabc333" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.223 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" iface="eth0" netns="/var/run/netns/cni-48604f60-41da-c16c-de2b-59d30cabc333" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.223 [INFO][4753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.223 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.308 [INFO][4766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.309 [INFO][4766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.327 [INFO][4766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.359 [WARNING][4766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.359 [INFO][4766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.370 [INFO][4766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:12.380143 containerd[2026]: 2024-12-13 01:55:12.374 [INFO][4753] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:12.383235 containerd[2026]: time="2024-12-13T01:55:12.381410542Z" level=info msg="TearDown network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" successfully" Dec 13 01:55:12.383235 containerd[2026]: time="2024-12-13T01:55:12.381459502Z" level=info msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" returns successfully" Dec 13 01:55:12.386963 containerd[2026]: time="2024-12-13T01:55:12.383775010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-l7swh,Uid:40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:12.389221 systemd[1]: run-netns-cni\x2d48604f60\x2d41da\x2dc16c\x2dde2b\x2d59d30cabc333.mount: Deactivated successfully. Dec 13 01:55:12.820973 (udev-worker)[4725]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:12.824875 systemd-networkd[1947]: cali5facccc4bbd: Link UP Dec 13 01:55:12.828392 systemd-networkd[1947]: cali5facccc4bbd: Gained carrier Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.550 [INFO][4795] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0 calico-apiserver-59b7bf6bf4- calico-apiserver 46e827e0-09e5-4459-853e-fe2f3072d9fa 772 0 2024-12-13 01:54:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b7bf6bf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-221 calico-apiserver-59b7bf6bf4-wfgbb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5facccc4bbd [] []}} ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.558 [INFO][4795] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.719 [INFO][4826] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" HandleID="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.744 [INFO][4826] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" HandleID="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001ceb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-221", "pod":"calico-apiserver-59b7bf6bf4-wfgbb", "timestamp":"2024-12-13 01:55:12.719281908 
+0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.745 [INFO][4826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.745 [INFO][4826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.745 [INFO][4826] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.752 [INFO][4826] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.762 [INFO][4826] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.773 [INFO][4826] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.778 [INFO][4826] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.783 [INFO][4826] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.783 [INFO][4826] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.787 [INFO][4826] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673 Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.796 [INFO][4826] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.808 [INFO][4826] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.1/26] block=192.168.86.0/26 handle="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.808 [INFO][4826] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.1/26] handle="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" host="ip-172-31-19-221" Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.809 [INFO][4826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:12.897451 containerd[2026]: 2024-12-13 01:55:12.809 [INFO][4826] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.1/26] IPv6=[] ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" HandleID="k8s-pod-network.c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.813 [INFO][4795] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"46e827e0-09e5-4459-853e-fe2f3072d9fa", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"calico-apiserver-59b7bf6bf4-wfgbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5facccc4bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.814 [INFO][4795] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.1/32] ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.814 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5facccc4bbd ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.830 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.831 [INFO][4795] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"46e827e0-09e5-4459-853e-fe2f3072d9fa", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673", Pod:"calico-apiserver-59b7bf6bf4-wfgbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5facccc4bbd", MAC:"5e:0d:91:46:46:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:12.899787 containerd[2026]: 2024-12-13 01:55:12.879 [INFO][4795] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-wfgbb" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:13.000899 containerd[2026]: time="2024-12-13T01:55:13.000836493Z" level=info msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" Dec 13 01:55:13.035148 systemd-networkd[1947]: cali8e12bb98b7a: Link UP Dec 13 01:55:13.038899 systemd-networkd[1947]: cali8e12bb98b7a: Gained carrier Dec 13 01:55:13.070673 containerd[2026]: time="2024-12-13T01:55:13.069385977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:13.071408 containerd[2026]: time="2024-12-13T01:55:13.069504177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:13.071408 containerd[2026]: time="2024-12-13T01:55:13.071335917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.072091 containerd[2026]: time="2024-12-13T01:55:13.071899233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.568 [INFO][4805] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0 calico-apiserver-59b7bf6bf4- calico-apiserver 40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969 773 0 2024-12-13 01:54:45 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b7bf6bf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-221 calico-apiserver-59b7bf6bf4-l7swh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e12bb98b7a [] []}} ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.569 [INFO][4805] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.721 [INFO][4829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" HandleID="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.758 [INFO][4829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" HandleID="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-221", "pod":"calico-apiserver-59b7bf6bf4-l7swh", "timestamp":"2024-12-13 01:55:12.721316196 +0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.758 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.810 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.810 [INFO][4829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.880 [INFO][4829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.910 [INFO][4829] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.930 [INFO][4829] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.937 [INFO][4829] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.950 [INFO][4829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.951 [INFO][4829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.960 [INFO][4829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7 Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.975 [INFO][4829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.997 [INFO][4829] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.2/26] block=192.168.86.0/26 handle="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.998 [INFO][4829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.2/26] handle="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" host="ip-172-31-19-221" Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.998 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:13.095102 containerd[2026]: 2024-12-13 01:55:12.998 [INFO][4829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.2/26] IPv6=[] ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" HandleID="k8s-pod-network.13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.015 [INFO][4805] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"calico-apiserver-59b7bf6bf4-l7swh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e12bb98b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.016 [INFO][4805] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.2/32] ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.017 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e12bb98b7a ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.036 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.038 [INFO][4805] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7", Pod:"calico-apiserver-59b7bf6bf4-l7swh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e12bb98b7a", MAC:"9e:f1:9d:be:82:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.098534 containerd[2026]: 2024-12-13 01:55:13.085 [INFO][4805] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7" Namespace="calico-apiserver" Pod="calico-apiserver-59b7bf6bf4-l7swh" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:13.155984 systemd[1]: Started cri-containerd-c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673.scope - libcontainer container c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673. Dec 13 01:55:13.202912 containerd[2026]: time="2024-12-13T01:55:13.202280290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:13.202912 containerd[2026]: time="2024-12-13T01:55:13.202430230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:13.202912 containerd[2026]: time="2024-12-13T01:55:13.202468246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.205065 containerd[2026]: time="2024-12-13T01:55:13.204759670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.260167 systemd[1]: Started cri-containerd-13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7.scope - libcontainer container 13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7. 
Dec 13 01:55:13.329052 containerd[2026]: time="2024-12-13T01:55:13.328992191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-wfgbb,Uid:46e827e0-09e5-4459-853e-fe2f3072d9fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673\"" Dec 13 01:55:13.336550 containerd[2026]: time="2024-12-13T01:55:13.336278075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.269 [INFO][4897] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.269 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" iface="eth0" netns="/var/run/netns/cni-5167fa27-d6bf-0058-ac27-b45e5f335439" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.270 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" iface="eth0" netns="/var/run/netns/cni-5167fa27-d6bf-0058-ac27-b45e5f335439" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.270 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" iface="eth0" netns="/var/run/netns/cni-5167fa27-d6bf-0058-ac27-b45e5f335439" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.270 [INFO][4897] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.270 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.355 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.356 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.356 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.380 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.381 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.384 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:13.394639 containerd[2026]: 2024-12-13 01:55:13.391 [INFO][4897] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:13.400889 containerd[2026]: time="2024-12-13T01:55:13.394871819Z" level=info msg="TearDown network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" successfully" Dec 13 01:55:13.400889 containerd[2026]: time="2024-12-13T01:55:13.394925879Z" level=info msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" returns successfully" Dec 13 01:55:13.400889 containerd[2026]: time="2024-12-13T01:55:13.396264935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fdk95,Uid:17858bd8-f036-4bd4-834e-afda00a53d7c,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:13.406757 systemd[1]: run-netns-cni\x2d5167fa27\x2dd6bf\x2d0058\x2dac27\x2db45e5f335439.mount: Deactivated successfully. Dec 13 01:55:13.474326 containerd[2026]: time="2024-12-13T01:55:13.474002051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b7bf6bf4-l7swh,Uid:40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7\"" Dec 13 01:55:13.539923 systemd-networkd[1947]: vxlan.calico: Gained IPv6LL Dec 13 01:55:13.668125 systemd-networkd[1947]: calif7b1f315173: Link UP Dec 13 01:55:13.671658 systemd-networkd[1947]: calif7b1f315173: Gained carrier Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.528 [INFO][4978] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0 coredns-6f6b679f8f- kube-system 17858bd8-f036-4bd4-834e-afda00a53d7c 783 0 2024-12-13 01:54:36 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-221 coredns-6f6b679f8f-fdk95 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7b1f315173 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.528 [INFO][4978] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95"
WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.587 [INFO][4998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" HandleID="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.603 [INFO][4998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" HandleID="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000265a90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-221", "pod":"coredns-6f6b679f8f-fdk95", "timestamp":"2024-12-13 01:55:13.58732968 +0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.604 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.604 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.604 [INFO][4998] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.607 [INFO][4998] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.615 [INFO][4998] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.623 [INFO][4998] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.627 [INFO][4998] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.630 [INFO][4998] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.631 [INFO][4998] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.633 [INFO][4998] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.641 [INFO][4998] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.652 [INFO][4998] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.3/26] 
block=192.168.86.0/26 handle="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.652 [INFO][4998] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.3/26] handle="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" host="ip-172-31-19-221" Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.652 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:13.703108 containerd[2026]: 2024-12-13 01:55:13.652 [INFO][4998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.3/26] IPv6=[] ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" HandleID="k8s-pod-network.49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.706315 containerd[2026]: 2024-12-13 01:55:13.658 [INFO][4978] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"17858bd8-f036-4bd4-834e-afda00a53d7c", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"coredns-6f6b679f8f-fdk95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7b1f315173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.706315 containerd[2026]: 2024-12-13 01:55:13.659 [INFO][4978] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.3/32] ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.706315 containerd[2026]: 2024-12-13 01:55:13.659 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7b1f315173 
ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.706315 containerd[2026]: 2024-12-13 01:55:13.669 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.706315 containerd[2026]: 2024-12-13 01:55:13.671 [INFO][4978] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"17858bd8-f036-4bd4-834e-afda00a53d7c", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b", Pod:"coredns-6f6b679f8f-fdk95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7b1f315173", MAC:"ea:9c:e2:32:2e:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.707326 containerd[2026]: 2024-12-13 01:55:13.689 [INFO][4978] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b" Namespace="kube-system" Pod="coredns-6f6b679f8f-fdk95" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:13.767890 containerd[2026]: time="2024-12-13T01:55:13.767283001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:13.767890 containerd[2026]: time="2024-12-13T01:55:13.767815549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:13.767890 containerd[2026]: time="2024-12-13T01:55:13.767876941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.768477 containerd[2026]: time="2024-12-13T01:55:13.768246817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.813823 systemd[1]: Started cri-containerd-49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b.scope - libcontainer container 49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b. Dec 13 01:55:13.898636 containerd[2026]: time="2024-12-13T01:55:13.898452745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fdk95,Uid:17858bd8-f036-4bd4-834e-afda00a53d7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b\"" Dec 13 01:55:13.908232 containerd[2026]: time="2024-12-13T01:55:13.907996765Z" level=info msg="CreateContainer within sandbox \"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:13.930325 containerd[2026]: time="2024-12-13T01:55:13.930119726Z" level=info msg="CreateContainer within sandbox \"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21e4d3b820b9cdb8524d8abf1d002c49ea82359fdccefc325e68e5832521626c\"" Dec 13 01:55:13.933125 containerd[2026]: time="2024-12-13T01:55:13.933045602Z" level=info msg="StartContainer for \"21e4d3b820b9cdb8524d8abf1d002c49ea82359fdccefc325e68e5832521626c\"" Dec 13 01:55:13.985817 systemd[1]: Started cri-containerd-21e4d3b820b9cdb8524d8abf1d002c49ea82359fdccefc325e68e5832521626c.scope - libcontainer container 21e4d3b820b9cdb8524d8abf1d002c49ea82359fdccefc325e68e5832521626c. Dec 13 01:55:13.995946 containerd[2026]: time="2024-12-13T01:55:13.995878574Z" level=info msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" Dec 13 01:55:13.999400 containerd[2026]: time="2024-12-13T01:55:13.996712694Z" level=info msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" Dec 13 01:55:14.085477 containerd[2026]: time="2024-12-13T01:55:14.085321786Z" level=info msg="StartContainer for \"21e4d3b820b9cdb8524d8abf1d002c49ea82359fdccefc325e68e5832521626c\" returns successfully" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.162 [INFO][5111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.162 [INFO][5111] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" iface="eth0" netns="/var/run/netns/cni-c1eabc4a-894d-8aa8-ecac-9d20017cc0e2" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.168 [INFO][5111] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" iface="eth0" netns="/var/run/netns/cni-c1eabc4a-894d-8aa8-ecac-9d20017cc0e2" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.173 [INFO][5111] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" iface="eth0" netns="/var/run/netns/cni-c1eabc4a-894d-8aa8-ecac-9d20017cc0e2" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.173 [INFO][5111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.173 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.331 [INFO][5134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.332 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.333 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.353 [WARNING][5134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.353 [INFO][5134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.360 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:14.367813 containerd[2026]: 2024-12-13 01:55:14.364 [INFO][5111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:14.372167 containerd[2026]: time="2024-12-13T01:55:14.368685420Z" level=info msg="TearDown network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" successfully" Dec 13 01:55:14.372167 containerd[2026]: time="2024-12-13T01:55:14.368728860Z" level=info msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" returns successfully" Dec 13 01:55:14.376851 containerd[2026]: time="2024-12-13T01:55:14.376681920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j259d,Uid:44b75188-75ae-44a3-965d-98692905f7b3,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:14.377673 systemd[1]: run-netns-cni\x2dc1eabc4a\x2d894d\x2d8aa8\x2decac\x2d9d20017cc0e2.mount: Deactivated successfully. Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.221 [INFO][5112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.221 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" iface="eth0" netns="/var/run/netns/cni-3bae907c-fb40-0e8a-69ac-af442183ad16" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.222 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" iface="eth0" netns="/var/run/netns/cni-3bae907c-fb40-0e8a-69ac-af442183ad16" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.223 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" iface="eth0" netns="/var/run/netns/cni-3bae907c-fb40-0e8a-69ac-af442183ad16" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.223 [INFO][5112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.223 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.381 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.381 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.381 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.400 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.400 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.408 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:14.426314 containerd[2026]: 2024-12-13 01:55:14.422 [INFO][5112] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:14.431624 containerd[2026]: time="2024-12-13T01:55:14.428962404Z" level=info msg="TearDown network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" successfully" Dec 13 01:55:14.431624 containerd[2026]: time="2024-12-13T01:55:14.429011028Z" level=info msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" returns successfully" Dec 13 01:55:14.437269 containerd[2026]: time="2024-12-13T01:55:14.435063780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t284s,Uid:a10cc81f-5a1f-43e0-b9b3-7b3335bef263,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:14.436120 systemd[1]: run-netns-cni\x2d3bae907c\x2dfb40\x2d0e8a\x2d69ac\x2daf442183ad16.mount: Deactivated successfully. Dec 13 01:55:14.621446 kubelet[3338]: I1213 01:55:14.621200 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fdk95" podStartSLOduration=38.621147937 podStartE2EDuration="38.621147937s" podCreationTimestamp="2024-12-13 01:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:14.521312556 +0000 UTC m=+43.793896238" watchObservedRunningTime="2024-12-13 01:55:14.621147937 +0000 UTC m=+43.893731583" Dec 13 01:55:14.820150 systemd-networkd[1947]: cali8e12bb98b7a: Gained IPv6LL Dec 13 01:55:14.884823 systemd-networkd[1947]: cali5facccc4bbd: Gained IPv6LL Dec 13 01:55:14.925310 systemd-networkd[1947]: cali611c37eeb05: Link UP Dec 13 01:55:14.928336 systemd-networkd[1947]: cali611c37eeb05: Gained carrier Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.571 [INFO][5149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0 csi-node-driver- calico-system 44b75188-75ae-44a3-965d-98692905f7b3 795 0 2024-12-13 01:54:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-221 csi-node-driver-j259d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali611c37eeb05 [] []}} ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.575 [INFO][5149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.749 [INFO][5171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" HandleID="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.793 [INFO][5171] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" HandleID="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8550), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-221", "pod":"csi-node-driver-j259d", "timestamp":"2024-12-13 01:55:14.749938826 +0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.794 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.794 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.794 [INFO][5171] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.802 [INFO][5171] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.818 [INFO][5171] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.851 [INFO][5171] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.860 [INFO][5171] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.871 [INFO][5171] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.871 [INFO][5171] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.875 [INFO][5171] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50 Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.883 [INFO][5171] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.903 [INFO][5171] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.4/26] block=192.168.86.0/26 handle="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.904 [INFO][5171] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.4/26] handle="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" host="ip-172-31-19-221" Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.904 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
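The [5171] walk above (ipam.go 660 -> 489 -> 155 -> 232 -> 1180 -> 1216) is Calico's block-affinity allocation: the node first looks for a /26 block it already holds an affinity for, loads it, and claims the next free ordinal. The toy below reproduces just that last step under simplified assumptions; it is an illustrative model, not Calico's code, which stores blocks in the datastore and claims them with compare-and-swap writes.

```go
// Toy model of the block-affinity walk logged above (ipam.go 660 ->
// 489 -> 155 -> 232 -> 1180 -> 1216). Illustrative only: real Calico
// blocks live in the datastore and are claimed with compare-and-swap
// writes, not an in-memory bitmap.
package main

import (
	"fmt"
	"net"
)

// block models one allocation block with a free/used bitmap and a
// node affinity.
type block struct {
	cidr *net.IPNet
	used []bool // one slot per ordinal; 64 for a /26
	host string // node holding the affinity
}

// assign mirrors "Attempting to assign 1 addresses from block": scan
// for the first free ordinal, mark it used, return the address.
func (b *block) assign(host string) (net.IP, error) {
	if b.host != host {
		return nil, fmt.Errorf("block %s not affine to %s", b.cidr, host)
	}
	base := b.cidr.IP.To4()
	for ord, inUse := range b.used {
		if !inUse {
			b.used[ord] = true // "Writing block in order to claim IPs"
			return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.86.0/26")
	b := &block{cidr: cidr, used: make([]bool, 64), host: "ip-172-31-19-221"}
	for i := 0; i < 4; i++ {
		b.used[i] = true // .0-.3 taken, matching the state this log implies
	}
	ip, err := b.assign("ip-172-31-19-221")
	fmt.Println(ip, err) // 192.168.86.4 <nil>, as claimed above
}
```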
Dec 13 01:55:14.976085 containerd[2026]: 2024-12-13 01:55:14.905 [INFO][5171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.4/26] IPv6=[] ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" HandleID="k8s-pod-network.b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.915 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b75188-75ae-44a3-965d-98692905f7b3", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"csi-node-driver-j259d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali611c37eeb05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.915 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.4/32] ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.915 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali611c37eeb05 ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.930 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.930 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" 
Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b75188-75ae-44a3-965d-98692905f7b3", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50", Pod:"csi-node-driver-j259d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali611c37eeb05", MAC:"6e:3f:73:b6:48:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.979699 containerd[2026]: 2024-12-13 01:55:14.970 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50" Namespace="calico-system" Pod="csi-node-driver-j259d" WorkloadEndpoint="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:15.004133 containerd[2026]: time="2024-12-13T01:55:15.002795903Z" level=info msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" Dec 13 01:55:15.069832 systemd-networkd[1947]: cali02dba7d584e: Link UP Dec 13 01:55:15.070413 systemd-networkd[1947]: cali02dba7d584e: Gained carrier Dec 13 01:55:15.111139 containerd[2026]: time="2024-12-13T01:55:15.109563827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:15.111139 containerd[2026]: time="2024-12-13T01:55:15.109758587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:15.111139 containerd[2026]: time="2024-12-13T01:55:15.109791527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.111139 containerd[2026]: time="2024-12-13T01:55:15.110747891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.726 [INFO][5160] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0 coredns-6f6b679f8f- kube-system a10cc81f-5a1f-43e0-b9b3-7b3335bef263 796 0 2024-12-13 01:54:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-221 coredns-6f6b679f8f-t284s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02dba7d584e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.729 [INFO][5160] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.883 [INFO][5179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" HandleID="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.921 [INFO][5179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" HandleID="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c340), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-221", "pod":"coredns-6f6b679f8f-t284s", "timestamp":"2024-12-13 01:55:14.883482518 +0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.922 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.922 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
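Every assignment and release in this log is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock": the lock serializes CNI invocations that run concurrently on one node (handlers [5171] and [5179] overlap within this same second). A minimal sketch of such a host-wide lock, assuming a flock-style lock file; the path is illustrative, not Calico's actual lock location.

```go
// Sketch of a host-wide advisory lock of the kind the "About to
// acquire / Acquired / Released host-wide IPAM lock" entries describe:
// a file lock serializing concurrent CNI ADD/DEL on one node.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	log.Println("About to acquire host-wide IPAM lock.")
	// LOCK_EX blocks until every other holder releases; the kernel
	// drops the lock automatically if the plugin process dies.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	log.Println("Acquired host-wide IPAM lock.")
	defer func() {
		unix.Flock(int(f.Fd()), unix.LOCK_UN)
		log.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	err := withHostWideLock("/var/run/example-ipam.lock", func() error {
		// ...assign or release addresses here...
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```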
Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.922 [INFO][5179] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.932 [INFO][5179] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.948 [INFO][5179] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.975 [INFO][5179] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.982 [INFO][5179] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.991 [INFO][5179] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:14.991 [INFO][5179] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.002 [INFO][5179] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.016 [INFO][5179] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.040 [INFO][5179] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.5/26] block=192.168.86.0/26 handle="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.041 [INFO][5179] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.5/26] handle="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" host="ip-172-31-19-221" Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.041 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
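These Calico entries are Go struct dumps wrapped in containerd's output, which makes them unpleasant to grep. A small extractor for the "Successfully claimed IPs" events, with a regex written against the exact lines above (other component versions may format these fields differently):

```go
// Extract IP-claim events from the journal on stdin, e.g.:
//   journalctl -u containerd | go run extract.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var claimRE = regexp.MustCompile(
	`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+) ` +
		`handle="k8s-pod-network\.([0-9a-f]+)" host="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries are very long
	for sc.Scan() {
		if m := claimRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("ip=%s block=%s sandbox=%.12s host=%s\n",
				m[1], m[2], m[3], m[4])
		}
	}
}
```

Fed this section, it prints one line per claim: .3 through .6, all from block 192.168.86.0/26 on ip-172-31-19-221.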
Dec 13 01:55:15.156492 containerd[2026]: 2024-12-13 01:55:15.041 [INFO][5179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.5/26] IPv6=[] ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" HandleID="k8s-pod-network.834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.162119 containerd[2026]: 2024-12-13 01:55:15.058 [INFO][5160] cni-plugin/k8s.go 386: Populated endpoint ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a10cc81f-5a1f-43e0-b9b3-7b3335bef263", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"coredns-6f6b679f8f-t284s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dba7d584e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.162119 containerd[2026]: 2024-12-13 01:55:15.059 [INFO][5160] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.5/32] ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.162119 containerd[2026]: 2024-12-13 01:55:15.060 [INFO][5160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02dba7d584e ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.162119 containerd[2026]: 2024-12-13 01:55:15.069 [INFO][5160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" 
WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.162119 containerd[2026]: 2024-12-13 01:55:15.080 [INFO][5160] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a10cc81f-5a1f-43e0-b9b3-7b3335bef263", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc", Pod:"coredns-6f6b679f8f-t284s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dba7d584e", MAC:"36:a5:f4:8f:6d:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.166411 containerd[2026]: 2024-12-13 01:55:15.137 [INFO][5160] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc" Namespace="kube-system" Pod="coredns-6f6b679f8f-t284s" WorkloadEndpoint="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:15.171238 systemd[1]: Started cri-containerd-b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50.scope - libcontainer container b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50. Dec 13 01:55:15.241359 containerd[2026]: time="2024-12-13T01:55:15.241151196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:15.242156 containerd[2026]: time="2024-12-13T01:55:15.241319232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:15.242156 containerd[2026]: time="2024-12-13T01:55:15.241370052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.242156 containerd[2026]: time="2024-12-13T01:55:15.241587060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.301341 systemd[1]: Started cri-containerd-834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc.scope - libcontainer container 834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc. Dec 13 01:55:15.333657 systemd-networkd[1947]: calif7b1f315173: Gained IPv6LL Dec 13 01:55:15.437811 containerd[2026]: time="2024-12-13T01:55:15.435761197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j259d,Uid:44b75188-75ae-44a3-965d-98692905f7b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50\"" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.279 [INFO][5222] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.279 [INFO][5222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" iface="eth0" netns="/var/run/netns/cni-07b8a5fe-f010-1b20-3643-90142aa75999" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.281 [INFO][5222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" iface="eth0" netns="/var/run/netns/cni-07b8a5fe-f010-1b20-3643-90142aa75999" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.283 [INFO][5222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" iface="eth0" netns="/var/run/netns/cni-07b8a5fe-f010-1b20-3643-90142aa75999" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.283 [INFO][5222] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.283 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.413 [INFO][5295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.413 [INFO][5295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.413 [INFO][5295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.440 [WARNING][5295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.441 [INFO][5295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.447 [INFO][5295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:15.456558 containerd[2026]: 2024-12-13 01:55:15.453 [INFO][5222] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:15.459944 containerd[2026]: time="2024-12-13T01:55:15.457702369Z" level=info msg="TearDown network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" successfully" Dec 13 01:55:15.462607 containerd[2026]: time="2024-12-13T01:55:15.457752877Z" level=info msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" returns successfully" Dec 13 01:55:15.465419 systemd[1]: run-netns-cni\x2d07b8a5fe\x2df010\x2d1b20\x2d3643\x2d90142aa75999.mount: Deactivated successfully. Dec 13 01:55:15.467384 containerd[2026]: time="2024-12-13T01:55:15.466062337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f6ff568d-sz27l,Uid:98155941-7b7d-48f1-80f8-b0abe7a2cd77,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:15.475875 containerd[2026]: time="2024-12-13T01:55:15.475790533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t284s,Uid:a10cc81f-5a1f-43e0-b9b3-7b3335bef263,Namespace:kube-system,Attempt:1,} returns sandbox id \"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc\"" Dec 13 01:55:15.485160 containerd[2026]: time="2024-12-13T01:55:15.485089429Z" level=info msg="CreateContainer within sandbox \"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:15.550581 containerd[2026]: time="2024-12-13T01:55:15.550397390Z" level=info msg="CreateContainer within sandbox \"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e66ee76c0c7b8b05effe58319f27f486368adc5fdc153f4706b2bf135843b5cc\"" Dec 13 01:55:15.551862 containerd[2026]: time="2024-12-13T01:55:15.551714942Z" level=info msg="StartContainer for \"e66ee76c0c7b8b05effe58319f27f486368adc5fdc153f4706b2bf135843b5cc\"" Dec 13 01:55:15.617991 systemd[1]: Started cri-containerd-e66ee76c0c7b8b05effe58319f27f486368adc5fdc153f4706b2bf135843b5cc.scope - libcontainer container e66ee76c0c7b8b05effe58319f27f486368adc5fdc153f4706b2bf135843b5cc. 
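The "Started cri-containerd-<id>.scope - libcontainer container <id>" entries are systemd creating a transient scope unit per container: runc, using the systemd cgroup driver, asks PID 1 for a scope so the container's processes get their own cgroup. A hedged sketch of the same request via go-systemd, wrapping the current process; the unit name, slice, and properties here are illustrative, not what containerd sends.

```go
// Create a transient systemd scope around an existing PID, the
// mechanism behind the "Started cri-containerd-<id>.scope" entries.
// Needs access to the system D-Bus (typically root).
package main

import (
	"context"
	"fmt"
	"os"

	systemd "github.com/coreos/go-systemd/v22/dbus"
	godbus "github.com/godbus/dbus/v5"
)

func main() {
	ctx := context.Background()
	conn, err := systemd.NewWithContext(ctx) // connection to systemd over D-Bus
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	props := []systemd.Property{
		systemd.PropSlice("system.slice"),
		systemd.PropDescription("libcontainer container demo"),
		// A scope adopts existing PIDs instead of spawning a process.
		{Name: "PIDs", Value: godbus.MakeVariant([]uint32{uint32(os.Getpid())})},
	}
	done := make(chan string, 1)
	if _, err := conn.StartTransientUnitContext(
		ctx, "demo-container.scope", "replace", props, done); err != nil {
		panic(err)
	}
	fmt.Println("scope creation:", <-done) // "done" on success
}
```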
Dec 13 01:55:15.718051 containerd[2026]: time="2024-12-13T01:55:15.717725366Z" level=info msg="StartContainer for \"e66ee76c0c7b8b05effe58319f27f486368adc5fdc153f4706b2bf135843b5cc\" returns successfully" Dec 13 01:55:15.883473 systemd-networkd[1947]: calia629fbd93cf: Link UP Dec 13 01:55:15.885572 systemd-networkd[1947]: calia629fbd93cf: Gained carrier Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.666 [INFO][5328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0 calico-kube-controllers-9f6ff568d- calico-system 98155941-7b7d-48f1-80f8-b0abe7a2cd77 814 0 2024-12-13 01:54:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9f6ff568d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-221 calico-kube-controllers-9f6ff568d-sz27l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia629fbd93cf [] []}} ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.667 [INFO][5328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.751 [INFO][5365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" HandleID="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.781 [INFO][5365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" HandleID="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316560), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-221", "pod":"calico-kube-controllers-9f6ff568d-sz27l", "timestamp":"2024-12-13 01:55:15.751909095 +0000 UTC"}, Hostname:"ip-172-31-19-221", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.781 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.781 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
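Behind "Setting the host side veth name to ..." and systemd-networkd's "calia629fbd93cf: Link UP / Gained carrier", the dataplane is a plain veth pair: the cali* end stays in the host namespace, while the peer is moved into the pod's netns (the /var/run/netns/cni-<uuid> paths above) and becomes the pod's eth0. A rough sketch of those steps with github.com/vishvananda/netlink; the netns path and peer name are placeholders, and error handling is trimmed.

```go
// Create a veth pair and move one end into a pod's network namespace,
// the core of the dataplane_linux.go steps logged above. Needs root.
package main

import (
	"os"

	"github.com/vishvananda/netlink"
)

func main() {
	// Create the pair; the host side keeps the cali* name from the log.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "calia629fbd93cf"},
		PeerName:  "tmpveth0",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		panic(err)
	}
	// Open the pod's network namespace (placeholder path; the real one
	// is a /var/run/netns/cni-<uuid> file as seen in the entries above).
	ns, err := os.Open("/var/run/netns/cni-example")
	if err != nil {
		panic(err)
	}
	defer ns.Close()
	peer, err := netlink.LinkByName("tmpveth0")
	if err != nil {
		panic(err)
	}
	// Move the peer in; inside the netns it would then be renamed to
	// eth0 and given the /32 that IPAM just claimed.
	if err := netlink.LinkSetNsFd(peer, int(ns.Fd())); err != nil {
		panic(err)
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		panic(err)
	}
}
```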
Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.782 [INFO][5365] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-221' Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.789 [INFO][5365] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.799 [INFO][5365] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.820 [INFO][5365] ipam/ipam.go 489: Trying affinity for 192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.843 [INFO][5365] ipam/ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.852 [INFO][5365] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.852 [INFO][5365] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.855 [INFO][5365] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6 Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.863 [INFO][5365] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.874 [INFO][5365] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.86.6/26] block=192.168.86.0/26 handle="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.874 [INFO][5365] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.6/26] handle="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" host="ip-172-31-19-221" Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.874 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
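All of the claims logged in this section (.3 through .6) come out of the single block 192.168.86.0/26 that is affine to ip-172-31-19-221: Calico slices each IP pool into fixed-size blocks (by default /26, 64 addresses) so most allocations stay node-local and need no cross-node coordination. A quick sanity check of what that block covers:

```go
// Compute the address range of the /26 block seen in the log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.86.0/26")
	first := p.Addr()
	size := 1 << (32 - p.Bits()) // 64 addresses in a /26
	last := first
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s: %s .. %s (%d addrs)\n", p, first, last, size)
	// => block 192.168.86.0/26: 192.168.86.0 .. 192.168.86.63 (64 addrs)
}
```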
Dec 13 01:55:15.931481 containerd[2026]: 2024-12-13 01:55:15.874 [INFO][5365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.6/26] IPv6=[] ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" HandleID="k8s-pod-network.beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.877 [INFO][5328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0", GenerateName:"calico-kube-controllers-9f6ff568d-", Namespace:"calico-system", SelfLink:"", UID:"98155941-7b7d-48f1-80f8-b0abe7a2cd77", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f6ff568d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"", Pod:"calico-kube-controllers-9f6ff568d-sz27l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia629fbd93cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.878 [INFO][5328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.86.6/32] ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.878 [INFO][5328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia629fbd93cf ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.887 [INFO][5328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.891 [INFO][5328] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0", GenerateName:"calico-kube-controllers-9f6ff568d-", Namespace:"calico-system", SelfLink:"", UID:"98155941-7b7d-48f1-80f8-b0abe7a2cd77", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f6ff568d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6", Pod:"calico-kube-controllers-9f6ff568d-sz27l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia629fbd93cf", MAC:"56:a8:f7:de:d6:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.936021 containerd[2026]: 2024-12-13 01:55:15.922 [INFO][5328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6" Namespace="calico-system" Pod="calico-kube-controllers-9f6ff568d-sz27l" WorkloadEndpoint="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:16.010234 containerd[2026]: time="2024-12-13T01:55:16.009147192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:16.010234 containerd[2026]: time="2024-12-13T01:55:16.009276912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:16.010234 containerd[2026]: time="2024-12-13T01:55:16.009318420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.010804 containerd[2026]: time="2024-12-13T01:55:16.010409148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.058882 systemd[1]: Started cri-containerd-beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6.scope - libcontainer container beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6. 
Dec 13 01:55:16.100387 systemd-networkd[1947]: cali611c37eeb05: Gained IPv6LL Dec 13 01:55:16.141463 containerd[2026]: time="2024-12-13T01:55:16.141395257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9f6ff568d-sz27l,Uid:98155941-7b7d-48f1-80f8-b0abe7a2cd77,Namespace:calico-system,Attempt:1,} returns sandbox id \"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6\"" Dec 13 01:55:16.641309 kubelet[3338]: I1213 01:55:16.641192 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t284s" podStartSLOduration=40.641167899 podStartE2EDuration="40.641167899s" podCreationTimestamp="2024-12-13 01:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:16.595021875 +0000 UTC m=+45.867605545" watchObservedRunningTime="2024-12-13 01:55:16.641167899 +0000 UTC m=+45.913751533" Dec 13 01:55:16.740874 systemd-networkd[1947]: cali02dba7d584e: Gained IPv6LL Dec 13 01:55:17.379790 systemd-networkd[1947]: calia629fbd93cf: Gained IPv6LL Dec 13 01:55:17.886782 containerd[2026]: time="2024-12-13T01:55:17.886443917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.890486 containerd[2026]: time="2024-12-13T01:55:17.890389157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:55:17.892493 containerd[2026]: time="2024-12-13T01:55:17.892404437Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.900665 containerd[2026]: time="2024-12-13T01:55:17.900593513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:17.907853 containerd[2026]: time="2024-12-13T01:55:17.907786061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.570997074s" Dec 13 01:55:17.909843 containerd[2026]: time="2024-12-13T01:55:17.908122637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:17.944877 containerd[2026]: time="2024-12-13T01:55:17.944798501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:17.985122 containerd[2026]: time="2024-12-13T01:55:17.984963870Z" level=info msg="CreateContainer within sandbox \"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:18.017974 containerd[2026]: time="2024-12-13T01:55:18.017912378Z" level=info msg="CreateContainer within sandbox \"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"af9185b1d3941d6896b2a8b63a9105fa8922b25d54cce797a71d391a2d8a71dd\"" Dec 13 01:55:18.022499 containerd[2026]: time="2024-12-13T01:55:18.022403570Z" level=info msg="StartContainer for \"af9185b1d3941d6896b2a8b63a9105fa8922b25d54cce797a71d391a2d8a71dd\"" Dec 13 01:55:18.122298 systemd[1]: Started cri-containerd-af9185b1d3941d6896b2a8b63a9105fa8922b25d54cce797a71d391a2d8a71dd.scope - libcontainer container af9185b1d3941d6896b2a8b63a9105fa8922b25d54cce797a71d391a2d8a71dd. Dec 13 01:55:18.237268 containerd[2026]: time="2024-12-13T01:55:18.236874279Z" level=info msg="StartContainer for \"af9185b1d3941d6896b2a8b63a9105fa8922b25d54cce797a71d391a2d8a71dd\" returns successfully" Dec 13 01:55:18.266409 containerd[2026]: time="2024-12-13T01:55:18.264731439Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.267187 containerd[2026]: time="2024-12-13T01:55:18.267120903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:55:18.283506 containerd[2026]: time="2024-12-13T01:55:18.283318011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 336.811154ms" Dec 13 01:55:18.283919 containerd[2026]: time="2024-12-13T01:55:18.283735887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:18.296403 containerd[2026]: time="2024-12-13T01:55:18.296341671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:18.326015 containerd[2026]: time="2024-12-13T01:55:18.325618599Z" level=info msg="CreateContainer within sandbox \"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:18.364487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817878335.mount: Deactivated successfully. Dec 13 01:55:18.371142 containerd[2026]: time="2024-12-13T01:55:18.371053396Z" level=info msg="CreateContainer within sandbox \"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"65b9b41b6ff9698aa70dd98cc4016922041a97f446cd87d0d31fb899ac06ee22\"" Dec 13 01:55:18.373365 containerd[2026]: time="2024-12-13T01:55:18.373296256Z" level=info msg="StartContainer for \"65b9b41b6ff9698aa70dd98cc4016922041a97f446cd87d0d31fb899ac06ee22\"" Dec 13 01:55:18.457048 systemd[1]: Started cri-containerd-65b9b41b6ff9698aa70dd98cc4016922041a97f446cd87d0d31fb899ac06ee22.scope - libcontainer container 65b9b41b6ff9698aa70dd98cc4016922041a97f446cd87d0d31fb899ac06ee22. 
Dec 13 01:55:18.695689 containerd[2026]: time="2024-12-13T01:55:18.695568245Z" level=info msg="StartContainer for \"65b9b41b6ff9698aa70dd98cc4016922041a97f446cd87d0d31fb899ac06ee22\" returns successfully" Dec 13 01:55:19.738113 ntpd[1999]: Listen normally on 7 vxlan.calico 192.168.86.0:123 Dec 13 01:55:19.738264 ntpd[1999]: Listen normally on 8 vxlan.calico [fe80::6434:efff:fe88:fd97%4]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 7 vxlan.calico 192.168.86.0:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 8 vxlan.calico [fe80::6434:efff:fe88:fd97%4]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 9 cali5facccc4bbd [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 10 cali8e12bb98b7a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 11 calif7b1f315173 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 12 cali611c37eeb05 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:55:19.738720 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 13 cali02dba7d584e [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:55:19.738356 ntpd[1999]: Listen normally on 9 cali5facccc4bbd [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:19.739536 ntpd[1999]: 13 Dec 01:55:19 ntpd[1999]: Listen normally on 14 calia629fbd93cf [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:55:19.738425 ntpd[1999]: Listen normally on 10 cali8e12bb98b7a [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:19.738490 ntpd[1999]: Listen normally on 11 calif7b1f315173 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:55:19.738590 ntpd[1999]: Listen normally on 12 cali611c37eeb05 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:55:19.738661 ntpd[1999]: Listen normally on 13 cali02dba7d584e [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:55:19.738729 ntpd[1999]: Listen normally on 14 calia629fbd93cf [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:55:19.775109 kubelet[3338]: I1213 01:55:19.774386 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-wfgbb" podStartSLOduration=30.165840373 podStartE2EDuration="34.774357031s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="2024-12-13 01:55:13.335440847 +0000 UTC m=+42.608024493" lastFinishedPulling="2024-12-13 01:55:17.943957505 +0000 UTC m=+47.216541151" observedRunningTime="2024-12-13 01:55:18.705098321 +0000 UTC m=+47.977682015" watchObservedRunningTime="2024-12-13 01:55:19.774357031 +0000 UTC m=+49.046940929" Dec 13 01:55:20.088086 containerd[2026]: time="2024-12-13T01:55:20.083188420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:20.091861 containerd[2026]: time="2024-12-13T01:55:20.090997792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:55:20.093478 containerd[2026]: time="2024-12-13T01:55:20.093417376Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:20.094035 systemd[1]: Started sshd@9-172.31.19.221:22-139.178.68.195:33422.service - OpenSSH per-connection server daemon (139.178.68.195:33422). 
Dec 13 01:55:20.122395 containerd[2026]: time="2024-12-13T01:55:20.121486072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:20.127304 containerd[2026]: time="2024-12-13T01:55:20.127227220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.830622233s" Dec 13 01:55:20.127304 containerd[2026]: time="2024-12-13T01:55:20.127294396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:55:20.134720 containerd[2026]: time="2024-12-13T01:55:20.133272784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:55:20.139029 containerd[2026]: time="2024-12-13T01:55:20.138937096Z" level=info msg="CreateContainer within sandbox \"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:20.203653 containerd[2026]: time="2024-12-13T01:55:20.203567861Z" level=info msg="CreateContainer within sandbox \"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8bc0e6c30047494b2b546b654d89e5ca288ac9bc165dbd2454bd803834eb9a5b\"" Dec 13 01:55:20.206645 containerd[2026]: time="2024-12-13T01:55:20.205751273Z" level=info msg="StartContainer for \"8bc0e6c30047494b2b546b654d89e5ca288ac9bc165dbd2454bd803834eb9a5b\"" Dec 13 01:55:20.351370 sshd[5543]: Accepted publickey for core from 139.178.68.195 port 33422 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:20.352048 systemd[1]: Started cri-containerd-8bc0e6c30047494b2b546b654d89e5ca288ac9bc165dbd2454bd803834eb9a5b.scope - libcontainer container 8bc0e6c30047494b2b546b654d89e5ca288ac9bc165dbd2454bd803834eb9a5b. Dec 13 01:55:20.362422 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:20.398047 systemd-logind[2004]: New session 10 of user core. Dec 13 01:55:20.409296 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:55:20.657428 kubelet[3338]: I1213 01:55:20.656635 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:20.780592 containerd[2026]: time="2024-12-13T01:55:20.779692412Z" level=info msg="StartContainer for \"8bc0e6c30047494b2b546b654d89e5ca288ac9bc165dbd2454bd803834eb9a5b\" returns successfully" Dec 13 01:55:20.933454 sshd[5543]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:20.942397 systemd-logind[2004]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:55:20.944946 systemd[1]: sshd@9-172.31.19.221:22-139.178.68.195:33422.service: Deactivated successfully. Dec 13 01:55:20.953090 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:55:20.961115 systemd-logind[2004]: Removed session 10. 
Dec 13 01:55:21.171102 kubelet[3338]: I1213 01:55:21.169711 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59b7bf6bf4-l7swh" podStartSLOduration=31.359656826 podStartE2EDuration="36.169689534s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="2024-12-13 01:55:13.478612535 +0000 UTC m=+42.751196169" lastFinishedPulling="2024-12-13 01:55:18.288645219 +0000 UTC m=+47.561228877" observedRunningTime="2024-12-13 01:55:19.777699187 +0000 UTC m=+49.050282857" watchObservedRunningTime="2024-12-13 01:55:21.169689534 +0000 UTC m=+50.442273180" Dec 13 01:55:22.927581 containerd[2026]: time="2024-12-13T01:55:22.927491998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.932564 containerd[2026]: time="2024-12-13T01:55:22.931951762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:55:22.934669 containerd[2026]: time="2024-12-13T01:55:22.934584418Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.943501 containerd[2026]: time="2024-12-13T01:55:22.943430974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.947864 containerd[2026]: time="2024-12-13T01:55:22.947800270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.81446157s" Dec 13 01:55:22.948042 containerd[2026]: time="2024-12-13T01:55:22.947861542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:55:22.951314 containerd[2026]: time="2024-12-13T01:55:22.951258514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:23.012188 containerd[2026]: time="2024-12-13T01:55:23.012094831Z" level=info msg="CreateContainer within sandbox \"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:55:23.063245 containerd[2026]: time="2024-12-13T01:55:23.062173639Z" level=info msg="CreateContainer within sandbox \"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa\"" Dec 13 01:55:23.064602 containerd[2026]: time="2024-12-13T01:55:23.063648127Z" level=info msg="StartContainer for \"163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa\"" Dec 13 01:55:23.161054 systemd[1]: Started cri-containerd-163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa.scope - libcontainer container 163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa. 
Dec 13 01:55:23.333620 containerd[2026]: time="2024-12-13T01:55:23.333450464Z" level=info msg="StartContainer for \"163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa\" returns successfully" Dec 13 01:55:23.853442 kubelet[3338]: I1213 01:55:23.853327 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9f6ff568d-sz27l" podStartSLOduration=30.048469094 podStartE2EDuration="36.853299263s" podCreationTimestamp="2024-12-13 01:54:47 +0000 UTC" firstStartedPulling="2024-12-13 01:55:16.144863413 +0000 UTC m=+45.417447047" lastFinishedPulling="2024-12-13 01:55:22.949693582 +0000 UTC m=+52.222277216" observedRunningTime="2024-12-13 01:55:23.725841862 +0000 UTC m=+52.998425496" watchObservedRunningTime="2024-12-13 01:55:23.853299263 +0000 UTC m=+53.125882909" Dec 13 01:55:24.777505 containerd[2026]: time="2024-12-13T01:55:24.777420743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.779225 containerd[2026]: time="2024-12-13T01:55:24.778925039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:55:24.781071 containerd[2026]: time="2024-12-13T01:55:24.780950771Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.794030 containerd[2026]: time="2024-12-13T01:55:24.789177191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.794030 containerd[2026]: time="2024-12-13T01:55:24.790825692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.839502138s" Dec 13 01:55:24.794030 containerd[2026]: time="2024-12-13T01:55:24.792690324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:55:24.801842 containerd[2026]: time="2024-12-13T01:55:24.801731904Z" level=info msg="CreateContainer within sandbox \"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:24.834567 containerd[2026]: time="2024-12-13T01:55:24.832878000Z" level=info msg="CreateContainer within sandbox \"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"35e28c9c882b864e22ad94ba7ef639004525d17a150a81056b4ccd7327917f18\"" Dec 13 01:55:24.840100 containerd[2026]: time="2024-12-13T01:55:24.839825088Z" level=info msg="StartContainer for \"35e28c9c882b864e22ad94ba7ef639004525d17a150a81056b4ccd7327917f18\"" Dec 13 01:55:24.845019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993936930.mount: Deactivated successfully. 
Dec 13 01:55:24.938855 systemd[1]: Started cri-containerd-35e28c9c882b864e22ad94ba7ef639004525d17a150a81056b4ccd7327917f18.scope - libcontainer container 35e28c9c882b864e22ad94ba7ef639004525d17a150a81056b4ccd7327917f18. Dec 13 01:55:25.065191 containerd[2026]: time="2024-12-13T01:55:25.064674741Z" level=info msg="StartContainer for \"35e28c9c882b864e22ad94ba7ef639004525d17a150a81056b4ccd7327917f18\" returns successfully" Dec 13 01:55:25.379824 kubelet[3338]: I1213 01:55:25.379759 3338 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:55:25.379824 kubelet[3338]: I1213 01:55:25.379814 3338 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:55:25.716874 kubelet[3338]: I1213 01:55:25.714993 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j259d" podStartSLOduration=29.355871965 podStartE2EDuration="38.714970284s" podCreationTimestamp="2024-12-13 01:54:47 +0000 UTC" firstStartedPulling="2024-12-13 01:55:15.438294505 +0000 UTC m=+44.710878151" lastFinishedPulling="2024-12-13 01:55:24.797392836 +0000 UTC m=+54.069976470" observedRunningTime="2024-12-13 01:55:25.714418212 +0000 UTC m=+54.987001882" watchObservedRunningTime="2024-12-13 01:55:25.714970284 +0000 UTC m=+54.987553930" Dec 13 01:55:25.973250 systemd[1]: Started sshd@10-172.31.19.221:22-139.178.68.195:33428.service - OpenSSH per-connection server daemon (139.178.68.195:33428). Dec 13 01:55:26.158748 sshd[5742]: Accepted publickey for core from 139.178.68.195 port 33428 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:26.162320 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:26.172069 systemd-logind[2004]: New session 11 of user core. Dec 13 01:55:26.183656 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:55:26.481665 sshd[5742]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:26.489434 systemd[1]: sshd@10-172.31.19.221:22-139.178.68.195:33428.service: Deactivated successfully. Dec 13 01:55:26.494297 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:55:26.497126 systemd-logind[2004]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:55:26.499088 systemd-logind[2004]: Removed session 11. Dec 13 01:55:31.056916 containerd[2026]: time="2024-12-13T01:55:31.056843355Z" level=info msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.127 [WARNING][5771] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7", Pod:"calico-apiserver-59b7bf6bf4-l7swh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e12bb98b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.127 [INFO][5771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.127 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" iface="eth0" netns="" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.127 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.127 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.171 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.172 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.172 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.186 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.187 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.190 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:31.199509 containerd[2026]: 2024-12-13 01:55:31.193 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.199509 containerd[2026]: time="2024-12-13T01:55:31.199543971Z" level=info msg="TearDown network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" successfully" Dec 13 01:55:31.200714 containerd[2026]: time="2024-12-13T01:55:31.199584627Z" level=info msg="StopPodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" returns successfully" Dec 13 01:55:31.201757 containerd[2026]: time="2024-12-13T01:55:31.200866347Z" level=info msg="RemovePodSandbox for \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" Dec 13 01:55:31.201757 containerd[2026]: time="2024-12-13T01:55:31.200936391Z" level=info msg="Forcibly stopping sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\"" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.279 [WARNING][5798] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"40ef9bd9-93b0-4b2f-96bf-8dd3e39ec969", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"13b25ad4ff19a11679a4ccd76967f8e41744e140082b8d4461df1704cc0170a7", Pod:"calico-apiserver-59b7bf6bf4-l7swh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e12bb98b7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.279 [INFO][5798] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.279 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" iface="eth0" netns="" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.280 [INFO][5798] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.280 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.316 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.317 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.317 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.330 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.330 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" HandleID="k8s-pod-network.25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--l7swh-eth0" Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.333 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:31.339784 containerd[2026]: 2024-12-13 01:55:31.336 [INFO][5798] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99" Dec 13 01:55:31.339784 containerd[2026]: time="2024-12-13T01:55:31.338842072Z" level=info msg="TearDown network for sandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" successfully" Dec 13 01:55:31.346689 containerd[2026]: time="2024-12-13T01:55:31.346599352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:31.346839 containerd[2026]: time="2024-12-13T01:55:31.346736392Z" level=info msg="RemovePodSandbox \"25354728fb16c2d65b13d463a46c04a77d23b8332df205d564dd04618945fb99\" returns successfully" Dec 13 01:55:31.348071 containerd[2026]: time="2024-12-13T01:55:31.347909824Z" level=info msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.416 [WARNING][5822] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"17858bd8-f036-4bd4-834e-afda00a53d7c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b", Pod:"coredns-6f6b679f8f-fdk95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7b1f315173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.416 [INFO][5822] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.416 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" iface="eth0" netns="" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.416 [INFO][5822] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.416 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.453 [INFO][5828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.453 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.453 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.467 [WARNING][5828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.467 [INFO][5828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.471 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:31.479253 containerd[2026]: 2024-12-13 01:55:31.476 [INFO][5822] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.480909 containerd[2026]: time="2024-12-13T01:55:31.479309153Z" level=info msg="TearDown network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" successfully" Dec 13 01:55:31.480909 containerd[2026]: time="2024-12-13T01:55:31.479349329Z" level=info msg="StopPodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" returns successfully" Dec 13 01:55:31.480909 containerd[2026]: time="2024-12-13T01:55:31.480113537Z" level=info msg="RemovePodSandbox for \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" Dec 13 01:55:31.480909 containerd[2026]: time="2024-12-13T01:55:31.480183701Z" level=info msg="Forcibly stopping sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\"" Dec 13 01:55:31.535278 systemd[1]: Started sshd@11-172.31.19.221:22-139.178.68.195:54712.service - OpenSSH per-connection server daemon (139.178.68.195:54712). Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.570 [WARNING][5846] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"17858bd8-f036-4bd4-834e-afda00a53d7c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"49635648be50207ece0435976d8c7e305af4822895e50d4c041e2eaafef0314b", Pod:"coredns-6f6b679f8f-fdk95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7b1f315173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.570 [INFO][5846] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.570 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" iface="eth0" netns="" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.570 [INFO][5846] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.570 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.619 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.619 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.619 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.658 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.658 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" HandleID="k8s-pod-network.6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--fdk95-eth0" Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.662 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:31.670345 containerd[2026]: 2024-12-13 01:55:31.667 [INFO][5846] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0" Dec 13 01:55:31.672066 containerd[2026]: time="2024-12-13T01:55:31.671339418Z" level=info msg="TearDown network for sandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" successfully" Dec 13 01:55:31.680091 containerd[2026]: time="2024-12-13T01:55:31.680032398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:31.680745 containerd[2026]: time="2024-12-13T01:55:31.680289102Z" level=info msg="RemovePodSandbox \"6c7481dab447df4743e31dcff2a8f254f20e05a41aaa83f5a097db8c179dd5a0\" returns successfully" Dec 13 01:55:31.682161 containerd[2026]: time="2024-12-13T01:55:31.681200478Z" level=info msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" Dec 13 01:55:31.757194 sshd[5852]: Accepted publickey for core from 139.178.68.195 port 54712 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:31.759786 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:31.771884 systemd-logind[2004]: New session 12 of user core. Dec 13 01:55:31.782567 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.793 [WARNING][5874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0", GenerateName:"calico-kube-controllers-9f6ff568d-", Namespace:"calico-system", SelfLink:"", UID:"98155941-7b7d-48f1-80f8-b0abe7a2cd77", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f6ff568d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6", Pod:"calico-kube-controllers-9f6ff568d-sz27l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia629fbd93cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.794 [INFO][5874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.794 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" iface="eth0" netns="" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.794 [INFO][5874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.794 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.834 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.834 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.834 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.847 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.847 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.850 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:31.855516 containerd[2026]: 2024-12-13 01:55:31.853 [INFO][5874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:31.856600 containerd[2026]: time="2024-12-13T01:55:31.855609487Z" level=info msg="TearDown network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" successfully" Dec 13 01:55:31.856600 containerd[2026]: time="2024-12-13T01:55:31.855649567Z" level=info msg="StopPodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" returns successfully" Dec 13 01:55:31.857378 containerd[2026]: time="2024-12-13T01:55:31.857310607Z" level=info msg="RemovePodSandbox for \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" Dec 13 01:55:31.857471 containerd[2026]: time="2024-12-13T01:55:31.857390035Z" level=info msg="Forcibly stopping sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\"" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.932 [WARNING][5900] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0", GenerateName:"calico-kube-controllers-9f6ff568d-", Namespace:"calico-system", SelfLink:"", UID:"98155941-7b7d-48f1-80f8-b0abe7a2cd77", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9f6ff568d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"beab08bb44c9a303fddc19a227b5e4f302a6b7806c6e8489c6e919f618ec48a6", Pod:"calico-kube-controllers-9f6ff568d-sz27l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia629fbd93cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.933 [INFO][5900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.933 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" iface="eth0" netns="" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.933 [INFO][5900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.933 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.979 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.980 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.980 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.995 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:31.996 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" HandleID="k8s-pod-network.d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Workload="ip--172--31--19--221-k8s-calico--kube--controllers--9f6ff568d--sz27l-eth0" Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:32.001 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.008789 containerd[2026]: 2024-12-13 01:55:32.005 [INFO][5900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43" Dec 13 01:55:32.013190 containerd[2026]: time="2024-12-13T01:55:32.011620995Z" level=info msg="TearDown network for sandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" successfully" Dec 13 01:55:32.022852 containerd[2026]: time="2024-12-13T01:55:32.022552503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:32.023094 containerd[2026]: time="2024-12-13T01:55:32.022920303Z" level=info msg="RemovePodSandbox \"d7c7a0b2b3564317552e0827d12970501a30db15622390df37efc5cc02630e43\" returns successfully" Dec 13 01:55:32.026058 containerd[2026]: time="2024-12-13T01:55:32.024511395Z" level=info msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" Dec 13 01:55:32.123063 sshd[5852]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:32.136674 systemd[1]: sshd@11-172.31.19.221:22-139.178.68.195:54712.service: Deactivated successfully. Dec 13 01:55:32.144847 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:55:32.151315 systemd-logind[2004]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:55:32.155696 systemd-logind[2004]: Removed session 12. Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.148 [WARNING][5938] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b75188-75ae-44a3-965d-98692905f7b3", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50", Pod:"csi-node-driver-j259d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali611c37eeb05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.149 [INFO][5938] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.149 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" iface="eth0" netns="" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.150 [INFO][5938] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.150 [INFO][5938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.196 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.196 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.196 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.209 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.209 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.212 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.217380 containerd[2026]: 2024-12-13 01:55:32.215 [INFO][5938] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.217380 containerd[2026]: time="2024-12-13T01:55:32.217441540Z" level=info msg="TearDown network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" successfully" Dec 13 01:55:32.217380 containerd[2026]: time="2024-12-13T01:55:32.217483732Z" level=info msg="StopPodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" returns successfully" Dec 13 01:55:32.219495 containerd[2026]: time="2024-12-13T01:55:32.219060376Z" level=info msg="RemovePodSandbox for \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" Dec 13 01:55:32.219495 containerd[2026]: time="2024-12-13T01:55:32.219123172Z" level=info msg="Forcibly stopping sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\"" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.287 [WARNING][5964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b75188-75ae-44a3-965d-98692905f7b3", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"b7cb549333723fc74777f287a1525025a429e0be9075f228a3d95c4d8c825d50", Pod:"csi-node-driver-j259d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali611c37eeb05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.288 [INFO][5964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.288 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" iface="eth0" netns="" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.288 [INFO][5964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.288 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.329 [INFO][5971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.329 [INFO][5971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.329 [INFO][5971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.342 [WARNING][5971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.342 [INFO][5971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" HandleID="k8s-pod-network.9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Workload="ip--172--31--19--221-k8s-csi--node--driver--j259d-eth0" Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.345 [INFO][5971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.350781 containerd[2026]: 2024-12-13 01:55:32.348 [INFO][5964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64" Dec 13 01:55:32.352296 containerd[2026]: time="2024-12-13T01:55:32.351961985Z" level=info msg="TearDown network for sandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" successfully" Dec 13 01:55:32.359650 containerd[2026]: time="2024-12-13T01:55:32.359575241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:32.359829 containerd[2026]: time="2024-12-13T01:55:32.359703113Z" level=info msg="RemovePodSandbox \"9134b93cf8794a7e7d90099f021ec461aa4067ee38b693e4c0b2f54dc0580b64\" returns successfully" Dec 13 01:55:32.360368 containerd[2026]: time="2024-12-13T01:55:32.360307181Z" level=info msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" Dec 13 01:55:32.540961 systemd[1]: run-containerd-runc-k8s.io-163156bd938fe468f2db365b56738bbe0378239da02d7bd6e7965446f211d5aa-runc.9TlYW0.mount: Deactivated successfully. Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.438 [WARNING][5989] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"46e827e0-09e5-4459-853e-fe2f3072d9fa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673", Pod:"calico-apiserver-59b7bf6bf4-wfgbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5facccc4bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.439 [INFO][5989] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.439 [INFO][5989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" iface="eth0" netns="" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.439 [INFO][5989] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.439 [INFO][5989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.488 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.489 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.489 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.510 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.510 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.530 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.560744 containerd[2026]: 2024-12-13 01:55:32.554 [INFO][5989] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.563722 containerd[2026]: time="2024-12-13T01:55:32.561004218Z" level=info msg="TearDown network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" successfully" Dec 13 01:55:32.563722 containerd[2026]: time="2024-12-13T01:55:32.561050802Z" level=info msg="StopPodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" returns successfully" Dec 13 01:55:32.563722 containerd[2026]: time="2024-12-13T01:55:32.561720642Z" level=info msg="RemovePodSandbox for \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" Dec 13 01:55:32.563722 containerd[2026]: time="2024-12-13T01:55:32.561772050Z" level=info msg="Forcibly stopping sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\"" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.679 [WARNING][6030] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0", GenerateName:"calico-apiserver-59b7bf6bf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"46e827e0-09e5-4459-853e-fe2f3072d9fa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b7bf6bf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"c8c45064106af55207dea16a3ed89146a6b0c99fee25427d1d45afe8c74da673", Pod:"calico-apiserver-59b7bf6bf4-wfgbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5facccc4bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.679 [INFO][6030] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.680 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" iface="eth0" netns="" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.680 [INFO][6030] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.680 [INFO][6030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.743 [INFO][6041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.744 [INFO][6041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.744 [INFO][6041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.760 [WARNING][6041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.761 [INFO][6041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" HandleID="k8s-pod-network.7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Workload="ip--172--31--19--221-k8s-calico--apiserver--59b7bf6bf4--wfgbb-eth0" Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.768 [INFO][6041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.777310 containerd[2026]: 2024-12-13 01:55:32.774 [INFO][6030] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660" Dec 13 01:55:32.778150 containerd[2026]: time="2024-12-13T01:55:32.777426535Z" level=info msg="TearDown network for sandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" successfully" Dec 13 01:55:32.783915 containerd[2026]: time="2024-12-13T01:55:32.783843343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:32.784266 containerd[2026]: time="2024-12-13T01:55:32.783969967Z" level=info msg="RemovePodSandbox \"7659fa4a39c3c29a99d759081e7fcd53274266c8de1c184a8430b967d7492660\" returns successfully" Dec 13 01:55:32.785159 containerd[2026]: time="2024-12-13T01:55:32.785105743Z" level=info msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.851 [WARNING][6063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a10cc81f-5a1f-43e0-b9b3-7b3335bef263", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc", Pod:"coredns-6f6b679f8f-t284s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dba7d584e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.852 [INFO][6063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.852 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" iface="eth0" netns="" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.852 [INFO][6063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.852 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.886 [INFO][6069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.886 [INFO][6069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.886 [INFO][6069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.899 [WARNING][6069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.899 [INFO][6069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.902 [INFO][6069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:32.907848 containerd[2026]: 2024-12-13 01:55:32.905 [INFO][6063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:32.910695 containerd[2026]: time="2024-12-13T01:55:32.908719952Z" level=info msg="TearDown network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" successfully" Dec 13 01:55:32.910695 containerd[2026]: time="2024-12-13T01:55:32.908762948Z" level=info msg="StopPodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" returns successfully" Dec 13 01:55:32.910695 containerd[2026]: time="2024-12-13T01:55:32.909916508Z" level=info msg="RemovePodSandbox for \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" Dec 13 01:55:32.910695 containerd[2026]: time="2024-12-13T01:55:32.909967232Z" level=info msg="Forcibly stopping sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\"" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:32.978 [WARNING][6087] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a10cc81f-5a1f-43e0-b9b3-7b3335bef263", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-221", ContainerID:"834faa6ecf8e6c0e7f54844e11d217b84e0af6c6c59a440ab31a50b711c6a8dc", Pod:"coredns-6f6b679f8f-t284s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02dba7d584e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:32.978 [INFO][6087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:32.978 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" iface="eth0" netns="" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:32.978 [INFO][6087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:32.978 [INFO][6087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.026 [INFO][6094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.027 [INFO][6094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.027 [INFO][6094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.040 [WARNING][6094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.040 [INFO][6094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" HandleID="k8s-pod-network.cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Workload="ip--172--31--19--221-k8s-coredns--6f6b679f8f--t284s-eth0" Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.043 [INFO][6094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:33.049200 containerd[2026]: 2024-12-13 01:55:33.045 [INFO][6087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062" Dec 13 01:55:33.049200 containerd[2026]: time="2024-12-13T01:55:33.048645641Z" level=info msg="TearDown network for sandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" successfully" Dec 13 01:55:33.057420 containerd[2026]: time="2024-12-13T01:55:33.057093413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:33.057420 containerd[2026]: time="2024-12-13T01:55:33.057198473Z" level=info msg="RemovePodSandbox \"cae292f21b3505991beb0a85e02a0fe283806f6e8624663c6fb8b2f306458062\" returns successfully" Dec 13 01:55:35.672952 kubelet[3338]: I1213 01:55:35.672690 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:37.163027 systemd[1]: Started sshd@12-172.31.19.221:22-139.178.68.195:36000.service - OpenSSH per-connection server daemon (139.178.68.195:36000). Dec 13 01:55:37.336491 sshd[6105]: Accepted publickey for core from 139.178.68.195 port 36000 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:37.340104 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:37.348959 systemd-logind[2004]: New session 13 of user core. Dec 13 01:55:37.355870 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:55:37.623333 sshd[6105]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:37.629766 systemd[1]: sshd@12-172.31.19.221:22-139.178.68.195:36000.service: Deactivated successfully. Dec 13 01:55:37.635243 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:55:37.636656 systemd-logind[2004]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:55:37.638541 systemd-logind[2004]: Removed session 13. Dec 13 01:55:37.673101 systemd[1]: Started sshd@13-172.31.19.221:22-139.178.68.195:36014.service - OpenSSH per-connection server daemon (139.178.68.195:36014). Dec 13 01:55:37.854274 sshd[6121]: Accepted publickey for core from 139.178.68.195 port 36014 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:37.857238 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:37.867937 systemd-logind[2004]: New session 14 of user core. 
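Just above, "RemovePodSandbox ... returns successfully" follows directly after a "Failed to get podSandbox status ... not found" warning: removal tolerates a sandbox that is already gone and only downgrades the lifecycle event. A minimal sketch of that tolerance, assuming a hypothetical metadata store (`store`, `removePodSandbox`) rather than the real containerd CRI types:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// store is a stand-in for the runtime's sandbox metadata store.
type store struct{ sandboxes map[string]bool }

func (s *store) status(id string) error {
	if !s.sandboxes[id] {
		return errNotFound
	}
	return nil
}

// removePodSandbox succeeds even when the sandbox is already gone: it logs
// the warning and emits the lifecycle event without a status, as the log
// above shows, instead of failing the whole removal.
func (s *store) removePodSandbox(id string) {
	if err := s.status(id); errors.Is(err, errNotFound) {
		fmt.Printf("warning: failed to get podSandbox status for %q: %v; sending event with nil podSandboxStatus\n", id, err)
	}
	delete(s.sandboxes, id)
	fmt.Printf("RemovePodSandbox %q returns successfully\n", id)
}

func main() {
	s := &store{sandboxes: map[string]bool{}}
	s.removePodSandbox("cae292f21b35") // already torn down: warn, then succeed
}
```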
Dec 13 01:55:37.872812 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:55:38.194192 sshd[6121]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:38.204367 systemd[1]: sshd@13-172.31.19.221:22-139.178.68.195:36014.service: Deactivated successfully. Dec 13 01:55:38.214095 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:55:38.216886 systemd-logind[2004]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:55:38.244815 systemd[1]: Started sshd@14-172.31.19.221:22-139.178.68.195:36028.service - OpenSSH per-connection server daemon (139.178.68.195:36028). Dec 13 01:55:38.247991 systemd-logind[2004]: Removed session 14. Dec 13 01:55:38.428901 sshd[6132]: Accepted publickey for core from 139.178.68.195 port 36028 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:38.432587 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:38.445036 systemd-logind[2004]: New session 15 of user core. Dec 13 01:55:38.452994 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:55:38.753808 sshd[6132]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:38.763438 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:55:38.763749 systemd-logind[2004]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:55:38.769608 systemd[1]: sshd@14-172.31.19.221:22-139.178.68.195:36028.service: Deactivated successfully. Dec 13 01:55:38.779032 systemd-logind[2004]: Removed session 15. Dec 13 01:55:43.800456 systemd[1]: Started sshd@15-172.31.19.221:22-139.178.68.195:36032.service - OpenSSH per-connection server daemon (139.178.68.195:36032). Dec 13 01:55:44.002069 sshd[6151]: Accepted publickey for core from 139.178.68.195 port 36032 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:44.006114 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:44.016111 systemd-logind[2004]: New session 16 of user core. Dec 13 01:55:44.021798 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:55:44.283941 sshd[6151]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:44.289891 systemd-logind[2004]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:55:44.291159 systemd[1]: sshd@15-172.31.19.221:22-139.178.68.195:36032.service: Deactivated successfully. Dec 13 01:55:44.296620 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:55:44.304109 systemd-logind[2004]: Removed session 16. Dec 13 01:55:49.327113 systemd[1]: Started sshd@16-172.31.19.221:22-139.178.68.195:38148.service - OpenSSH per-connection server daemon (139.178.68.195:38148). Dec 13 01:55:49.531961 sshd[6182]: Accepted publickey for core from 139.178.68.195 port 38148 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:49.533195 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:49.548666 systemd-logind[2004]: New session 17 of user core. Dec 13 01:55:49.555114 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:55:49.824100 sshd[6182]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:49.833474 systemd[1]: sshd@16-172.31.19.221:22-139.178.68.195:38148.service: Deactivated successfully. Dec 13 01:55:49.839423 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:55:49.841666 systemd-logind[2004]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:55:49.844819 systemd-logind[2004]: Removed session 17. Dec 13 01:55:54.873100 systemd[1]: Started sshd@17-172.31.19.221:22-139.178.68.195:38162.service - OpenSSH per-connection server daemon (139.178.68.195:38162). Dec 13 01:55:55.073639 sshd[6225]: Accepted publickey for core from 139.178.68.195 port 38162 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:55.077177 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:55.089680 systemd-logind[2004]: New session 18 of user core. Dec 13 01:55:55.099939 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:55:55.381864 sshd[6225]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:55.392379 systemd[1]: sshd@17-172.31.19.221:22-139.178.68.195:38162.service: Deactivated successfully. Dec 13 01:55:55.393296 systemd-logind[2004]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:55:55.400014 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:55:55.405971 systemd-logind[2004]: Removed session 18. Dec 13 01:56:00.425235 systemd[1]: Started sshd@18-172.31.19.221:22-139.178.68.195:38914.service - OpenSSH per-connection server daemon (139.178.68.195:38914). Dec 13 01:56:00.617337 sshd[6240]: Accepted publickey for core from 139.178.68.195 port 38914 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:00.620475 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:00.631156 systemd-logind[2004]: New session 19 of user core. Dec 13 01:56:00.635010 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:56:00.927950 sshd[6240]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:00.938388 systemd[1]: sshd@18-172.31.19.221:22-139.178.68.195:38914.service: Deactivated successfully. Dec 13 01:56:00.942460 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:56:00.945410 systemd-logind[2004]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:56:00.966070 systemd[1]: Started sshd@19-172.31.19.221:22-139.178.68.195:38920.service - OpenSSH per-connection server daemon (139.178.68.195:38920). Dec 13 01:56:00.968230 systemd-logind[2004]: Removed session 19. Dec 13 01:56:01.158340 sshd[6252]: Accepted publickey for core from 139.178.68.195 port 38920 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:01.161283 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:01.169870 systemd-logind[2004]: New session 20 of user core. Dec 13 01:56:01.178908 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:56:01.801368 sshd[6252]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:01.807727 systemd[1]: sshd@19-172.31.19.221:22-139.178.68.195:38920.service: Deactivated successfully. Dec 13 01:56:01.813378 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:56:01.819012 systemd-logind[2004]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:56:01.840347 systemd-logind[2004]: Removed session 20. Dec 13 01:56:01.850797 systemd[1]: Started sshd@20-172.31.19.221:22-139.178.68.195:38934.service - OpenSSH per-connection server daemon (139.178.68.195:38934).
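Each "sshd@N-<local>:22-<remote>:<port>.service" unit above is a per-connection instance spawned for one inbound TCP connection, so every login gets its own service that is deactivated when the connection closes. A small illustrative parser for that unit-name scheme as it appears in this log (the regular expression is an assumption fitted to these lines, not a systemd API):

```go
package main

import (
	"fmt"
	"regexp"
)

// unitRe matches names like
// "sshd@20-172.31.19.221:22-139.178.68.195:38934.service":
// instance number, local address:port, remote address:port.
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	unit := "sshd@20-172.31.19.221:22-139.178.68.195:38934.service"
	m := unitRe.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("not a per-connection sshd unit")
		return
	}
	fmt.Printf("instance %s: local %s:%s, remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
}
```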
Dec 13 01:56:02.060303 sshd[6263]: Accepted publickey for core from 139.178.68.195 port 38934 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:02.066142 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:02.077228 systemd-logind[2004]: New session 21 of user core. Dec 13 01:56:02.086084 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:56:05.446339 sshd[6263]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:05.457946 systemd[1]: sshd@20-172.31.19.221:22-139.178.68.195:38934.service: Deactivated successfully. Dec 13 01:56:05.462720 systemd-logind[2004]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:56:05.467014 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:56:05.470435 systemd[1]: session-21.scope: Consumed 1.179s CPU time. Dec 13 01:56:05.494573 systemd-logind[2004]: Removed session 21. Dec 13 01:56:05.504289 systemd[1]: Started sshd@21-172.31.19.221:22-139.178.68.195:38946.service - OpenSSH per-connection server daemon (139.178.68.195:38946). Dec 13 01:56:05.700946 sshd[6301]: Accepted publickey for core from 139.178.68.195 port 38946 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:05.704575 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:05.713476 systemd-logind[2004]: New session 22 of user core. Dec 13 01:56:05.720791 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:56:06.239452 sshd[6301]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:06.249473 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:56:06.255608 systemd[1]: sshd@21-172.31.19.221:22-139.178.68.195:38946.service: Deactivated successfully. Dec 13 01:56:06.266273 systemd-logind[2004]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:56:06.299181 systemd[1]: Started sshd@22-172.31.19.221:22-139.178.68.195:41048.service - OpenSSH per-connection server daemon (139.178.68.195:41048). Dec 13 01:56:06.301617 systemd-logind[2004]: Removed session 22. Dec 13 01:56:06.489313 sshd[6312]: Accepted publickey for core from 139.178.68.195 port 41048 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:06.492011 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:06.502086 systemd-logind[2004]: New session 23 of user core. Dec 13 01:56:06.507802 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:56:06.763204 sshd[6312]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:06.770325 systemd[1]: sshd@22-172.31.19.221:22-139.178.68.195:41048.service: Deactivated successfully. Dec 13 01:56:06.770400 systemd-logind[2004]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:56:06.777388 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:56:06.781124 systemd-logind[2004]: Removed session 23. Dec 13 01:56:11.808166 systemd[1]: Started sshd@23-172.31.19.221:22-139.178.68.195:41062.service - OpenSSH per-connection server daemon (139.178.68.195:41062). Dec 13 01:56:11.986777 sshd[6327]: Accepted publickey for core from 139.178.68.195 port 41062 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:11.991128 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:12.009173 systemd-logind[2004]: New session 24 of user core. 
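Every login in this stretch follows the same shape: sshd accepts a publickey, pam_unix opens the session, systemd-logind registers "New session N", and a session-N.scope starts. A sketch that pulls the user, remote port, and key fingerprint out of the "Accepted publickey" records, e.g. to audit the repeated logins from 139.178.68.195 (the regular expression is an assumption fitted to the exact format shown in this log):

```go
package main

import (
	"fmt"
	"regexp"
)

// acceptRe matches sshd's "Accepted publickey" line as it appears above:
// user, source address, source port, and the RSA key fingerprint.
var acceptRe = regexp.MustCompile(`Accepted publickey for (\S+) from ([\d.]+) port (\d+) ssh2: RSA (SHA256:\S+)`)

func main() {
	line := "Accepted publickey for core from 139.178.68.195 port 38934 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0"
	if m := acceptRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s addr=%s port=%s key=%s\n", m[1], m[2], m[3], m[4])
	}
}
```

Since the fingerprint is identical on every login here, grouping by the captured key would confirm these are all the same client reconnecting, not distinct users.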
Dec 13 01:56:12.018151 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:56:12.260497 sshd[6327]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:12.265813 systemd[1]: sshd@23-172.31.19.221:22-139.178.68.195:41062.service: Deactivated successfully. Dec 13 01:56:12.270500 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:56:12.274313 systemd-logind[2004]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:56:12.276115 systemd-logind[2004]: Removed session 24. Dec 13 01:56:17.306116 systemd[1]: Started sshd@24-172.31.19.221:22-139.178.68.195:43586.service - OpenSSH per-connection server daemon (139.178.68.195:43586). Dec 13 01:56:17.472579 sshd[6343]: Accepted publickey for core from 139.178.68.195 port 43586 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:17.475305 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:17.483964 systemd-logind[2004]: New session 25 of user core. Dec 13 01:56:17.493046 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:56:17.754973 sshd[6343]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:17.761503 systemd[1]: sshd@24-172.31.19.221:22-139.178.68.195:43586.service: Deactivated successfully. Dec 13 01:56:17.767118 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:56:17.769393 systemd-logind[2004]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:56:17.771360 systemd-logind[2004]: Removed session 25. Dec 13 01:56:22.797002 systemd[1]: Started sshd@25-172.31.19.221:22-139.178.68.195:43596.service - OpenSSH per-connection server daemon (139.178.68.195:43596). Dec 13 01:56:22.989702 sshd[6378]: Accepted publickey for core from 139.178.68.195 port 43596 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:22.993742 sshd[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:23.010757 systemd-logind[2004]: New session 26 of user core. Dec 13 01:56:23.019881 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:56:23.318006 sshd[6378]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:23.325218 systemd[1]: sshd@25-172.31.19.221:22-139.178.68.195:43596.service: Deactivated successfully. Dec 13 01:56:23.330503 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:56:23.337893 systemd-logind[2004]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:56:23.341332 systemd-logind[2004]: Removed session 26. Dec 13 01:56:28.360085 systemd[1]: Started sshd@26-172.31.19.221:22-139.178.68.195:37952.service - OpenSSH per-connection server daemon (139.178.68.195:37952). Dec 13 01:56:28.538895 sshd[6391]: Accepted publickey for core from 139.178.68.195 port 37952 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:28.542285 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:28.552363 systemd-logind[2004]: New session 27 of user core. Dec 13 01:56:28.557824 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:56:28.799484 sshd[6391]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:28.807385 systemd[1]: sshd@26-172.31.19.221:22-139.178.68.195:37952.service: Deactivated successfully. Dec 13 01:56:28.812982 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:56:28.815385 systemd-logind[2004]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:56:28.817863 systemd-logind[2004]: Removed session 27. Dec 13 01:56:33.848148 systemd[1]: Started sshd@27-172.31.19.221:22-139.178.68.195:37956.service - OpenSSH per-connection server daemon (139.178.68.195:37956). Dec 13 01:56:34.029257 sshd[6431]: Accepted publickey for core from 139.178.68.195 port 37956 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:34.031992 sshd[6431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:34.042954 systemd-logind[2004]: New session 28 of user core. Dec 13 01:56:34.048803 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 01:56:34.288799 sshd[6431]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:34.296641 systemd-logind[2004]: Session 28 logged out. Waiting for processes to exit. Dec 13 01:56:34.297728 systemd[1]: sshd@27-172.31.19.221:22-139.178.68.195:37956.service: Deactivated successfully. Dec 13 01:56:34.304498 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 01:56:34.310273 systemd-logind[2004]: Removed session 28. Dec 13 01:56:39.332051 systemd[1]: Started sshd@28-172.31.19.221:22-139.178.68.195:59230.service - OpenSSH per-connection server daemon (139.178.68.195:59230). Dec 13 01:56:39.510012 sshd[6448]: Accepted publickey for core from 139.178.68.195 port 59230 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:39.513079 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:39.522904 systemd-logind[2004]: New session 29 of user core. Dec 13 01:56:39.528851 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 13 01:56:39.780170 sshd[6448]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:39.786449 systemd[1]: sshd@28-172.31.19.221:22-139.178.68.195:59230.service: Deactivated successfully. Dec 13 01:56:39.791097 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 01:56:39.792961 systemd-logind[2004]: Session 29 logged out. Waiting for processes to exit. Dec 13 01:56:39.795452 systemd-logind[2004]: Removed session 29. Dec 13 01:56:44.822445 systemd[1]: Started sshd@29-172.31.19.221:22-139.178.68.195:59234.service - OpenSSH per-connection server daemon (139.178.68.195:59234). Dec 13 01:56:44.998095 sshd[6478]: Accepted publickey for core from 139.178.68.195 port 59234 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:45.002854 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:45.012503 systemd-logind[2004]: New session 30 of user core. Dec 13 01:56:45.021823 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 13 01:56:45.268756 sshd[6478]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:45.277301 systemd-logind[2004]: Session 30 logged out. Waiting for processes to exit. Dec 13 01:56:45.278445 systemd[1]: sshd@29-172.31.19.221:22-139.178.68.195:59234.service: Deactivated successfully. Dec 13 01:56:45.283038 systemd[1]: session-30.scope: Deactivated successfully. Dec 13 01:56:45.285791 systemd-logind[2004]: Removed session 30.
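Sessions 12 through 30 above open and close in strict "New session N" / "Removed session N" pairs. A closing sketch that pairs those two logind records by session number and reports how long each session lasted; the event values below are taken from session 21 in this log, and the helper types are hypothetical:

```go
package main

import (
	"fmt"
	"time"
)

type event struct {
	session string
	kind    string // "new" or "removed"
	at      time.Time
}

func main() {
	// parse reads the journald-style timestamp prefix used in this log.
	parse := func(s string) time.Time {
		t, _ := time.Parse("Jan 2 15:04:05.000000", s)
		return t
	}
	// Two real records from above: session 21 opened, then removed.
	events := []event{
		{"21", "new", parse("Dec 13 01:56:02.077228")},
		{"21", "removed", parse("Dec 13 01:56:05.494573")},
	}
	opened := map[string]time.Time{}
	for _, e := range events {
		switch e.kind {
		case "new":
			opened[e.session] = e.at
		case "removed":
			if start, ok := opened[e.session]; ok {
				fmt.Printf("session %s lasted %s\n", e.session, e.at.Sub(start))
			}
		}
	}
}
```

Run on the full log, this kind of pairing makes the outlier obvious: most sessions here live for well under a second, while session 21 stays open for several seconds and is the one systemd reports as having "Consumed 1.179s CPU time".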