Feb 13 19:48:39.166604 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:48:39.166650 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:48:39.166675 kernel: KASLR disabled due to lack of seed
Feb 13 19:48:39.166692 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:48:39.166708 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 
Feb 13 19:48:39.166724 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:48:39.166742 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:48:39.166758 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001      01000013)
Feb 13 19:48:39.166774 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:48:39.166789 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:48:39.166810 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:48:39.166826 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:48:39.166841 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:48:39.166857 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:48:39.166875 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:48:39.166896 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:48:39.166913 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:48:39.166929 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:48:39.166946 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:48:39.166962 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:48:39.166978 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:48:39.166995 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:39.167011 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:48:39.167027 kernel: Zone ranges:
Feb 13 19:48:39.167043 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:48:39.167059 kernel:   DMA32    empty
Feb 13 19:48:39.167079 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:48:39.167096 kernel: Movable zone start for each node
Feb 13 19:48:39.167112 kernel: Early memory node ranges
Feb 13 19:48:39.167128 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:48:39.167145 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:48:39.167161 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:48:39.167177 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:48:39.167193 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:48:39.167209 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:48:39.167226 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:48:39.167268 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:48:39.167288 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:39.167312 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:48:39.167330 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:48:39.167354 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:48:39.167372 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:48:39.167390 kernel: psci: Trusted OS migration not required
Feb 13 19:48:39.167412 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:48:39.167430 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:48:39.167447 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:48:39.167465 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 19:48:39.167482 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:48:39.167500 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:48:39.167518 kernel: CPU features: detected: Spectre-v2
Feb 13 19:48:39.167535 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:48:39.167553 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:48:39.167570 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:48:39.167588 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:48:39.167617 kernel: alternatives: applying boot alternatives
Feb 13 19:48:39.167638 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:39.167657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:48:39.167674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:48:39.167692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:48:39.167709 kernel: Fallback order for Node 0: 0 
Feb 13 19:48:39.167727 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Feb 13 19:48:39.167744 kernel: Policy zone: Normal
Feb 13 19:48:39.167762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:48:39.167779 kernel: software IO TLB: area num 2.
Feb 13 19:48:39.167797 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:48:39.167820 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:48:39.167838 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:48:39.167855 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:48:39.167873 kernel: rcu:         RCU event tracing is enabled.
Feb 13 19:48:39.167892 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:48:39.167909 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 19:48:39.167928 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 19:48:39.167945 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:48:39.167963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:48:39.167995 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:48:39.168020 kernel: GICv3: 96 SPIs implemented
Feb 13 19:48:39.168045 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:48:39.168062 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:48:39.168080 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:48:39.168097 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:48:39.168115 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:48:39.168132 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:48:39.168150 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:48:39.168168 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:48:39.168185 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:48:39.168202 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:48:39.168220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:48:39.168253 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:48:39.168311 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:48:39.168330 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:48:39.168348 kernel: Console: colour dummy device 80x25
Feb 13 19:48:39.168366 kernel: printk: console [tty1] enabled
Feb 13 19:48:39.168384 kernel: ACPI: Core revision 20230628
Feb 13 19:48:39.168402 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:48:39.168421 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:48:39.168439 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:48:39.168456 kernel: landlock: Up and running.
Feb 13 19:48:39.168480 kernel: SELinux:  Initializing.
Feb 13 19:48:39.168499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:39.171098 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:39.171143 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:39.171162 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:39.171180 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:48:39.171199 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 19:48:39.171217 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:48:39.171236 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:48:39.171303 kernel: Remapping and enabling EFI services.
Feb 13 19:48:39.171323 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:48:39.171342 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:48:39.171360 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:48:39.171379 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:48:39.171397 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:48:39.171415 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:48:39.171433 kernel: SMP: Total of 2 processors activated.
Feb 13 19:48:39.171451 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:48:39.171475 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:48:39.171494 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:48:39.171512 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:48:39.171543 kernel: alternatives: applying system-wide alternatives
Feb 13 19:48:39.171565 kernel: devtmpfs: initialized
Feb 13 19:48:39.171585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:48:39.171603 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:48:39.171622 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:48:39.171640 kernel: SMBIOS 3.0.0 present.
Feb 13 19:48:39.171659 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:48:39.171681 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:48:39.171700 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:48:39.171719 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:48:39.171738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:48:39.171756 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:48:39.171775 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Feb 13 19:48:39.171795 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:48:39.171818 kernel: cpuidle: using governor menu
Feb 13 19:48:39.171837 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:48:39.171855 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:48:39.171874 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:48:39.171893 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:48:39.171912 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:48:39.171930 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:48:39.171949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:48:39.171968 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:48:39.172011 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:48:39.172033 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:48:39.172053 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:48:39.172072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:48:39.172091 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:48:39.172109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:48:39.172128 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:48:39.172146 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:48:39.172165 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:48:39.172189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:48:39.172208 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:48:39.172227 kernel: ACPI: Interpreter enabled
Feb 13 19:48:39.172266 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:48:39.172287 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:48:39.172306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:48:39.172603 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:48:39.172818 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:48:39.173022 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:48:39.173451 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:48:39.173694 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:48:39.173722 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Feb 13 19:48:39.173742 kernel: acpiphp: Slot [1] registered
Feb 13 19:48:39.173761 kernel: acpiphp: Slot [2] registered
Feb 13 19:48:39.173780 kernel: acpiphp: Slot [3] registered
Feb 13 19:48:39.173798 kernel: acpiphp: Slot [4] registered
Feb 13 19:48:39.173825 kernel: acpiphp: Slot [5] registered
Feb 13 19:48:39.173845 kernel: acpiphp: Slot [6] registered
Feb 13 19:48:39.173863 kernel: acpiphp: Slot [7] registered
Feb 13 19:48:39.173881 kernel: acpiphp: Slot [8] registered
Feb 13 19:48:39.173899 kernel: acpiphp: Slot [9] registered
Feb 13 19:48:39.173918 kernel: acpiphp: Slot [10] registered
Feb 13 19:48:39.173936 kernel: acpiphp: Slot [11] registered
Feb 13 19:48:39.173955 kernel: acpiphp: Slot [12] registered
Feb 13 19:48:39.173973 kernel: acpiphp: Slot [13] registered
Feb 13 19:48:39.173991 kernel: acpiphp: Slot [14] registered
Feb 13 19:48:39.174014 kernel: acpiphp: Slot [15] registered
Feb 13 19:48:39.174032 kernel: acpiphp: Slot [16] registered
Feb 13 19:48:39.174051 kernel: acpiphp: Slot [17] registered
Feb 13 19:48:39.174069 kernel: acpiphp: Slot [18] registered
Feb 13 19:48:39.174087 kernel: acpiphp: Slot [19] registered
Feb 13 19:48:39.174106 kernel: acpiphp: Slot [20] registered
Feb 13 19:48:39.174125 kernel: acpiphp: Slot [21] registered
Feb 13 19:48:39.174143 kernel: acpiphp: Slot [22] registered
Feb 13 19:48:39.174161 kernel: acpiphp: Slot [23] registered
Feb 13 19:48:39.174184 kernel: acpiphp: Slot [24] registered
Feb 13 19:48:39.174203 kernel: acpiphp: Slot [25] registered
Feb 13 19:48:39.174221 kernel: acpiphp: Slot [26] registered
Feb 13 19:48:39.174260 kernel: acpiphp: Slot [27] registered
Feb 13 19:48:39.174350 kernel: acpiphp: Slot [28] registered
Feb 13 19:48:39.174373 kernel: acpiphp: Slot [29] registered
Feb 13 19:48:39.174391 kernel: acpiphp: Slot [30] registered
Feb 13 19:48:39.174410 kernel: acpiphp: Slot [31] registered
Feb 13 19:48:39.174429 kernel: PCI host bridge to bus 0000:00
Feb 13 19:48:39.176124 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:48:39.176370 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 19:48:39.176563 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:39.176746 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:48:39.177000 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:48:39.177333 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:48:39.177570 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:48:39.177805 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:48:39.178018 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:48:39.178232 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:39.178985 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:48:39.179198 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:48:39.179821 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:39.180115 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:48:39.180368 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:39.180572 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:39.180775 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:48:39.181000 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:48:39.181207 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:48:39.181852 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:48:39.182052 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:48:39.182265 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 19:48:39.182511 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:39.182539 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:48:39.182559 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:48:39.182578 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:48:39.182597 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:48:39.182616 kernel: iommu: Default domain type: Translated
Feb 13 19:48:39.182635 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:48:39.183365 kernel: efivars: Registered efivars operations
Feb 13 19:48:39.183386 kernel: vgaarb: loaded
Feb 13 19:48:39.183405 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:48:39.183424 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:48:39.183443 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:48:39.183462 kernel: pnp: PnP ACPI init
Feb 13 19:48:39.183720 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:48:39.183750 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:48:39.183777 kernel: NET: Registered PF_INET protocol family
Feb 13 19:48:39.183797 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:48:39.183816 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:48:39.183835 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:48:39.183853 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:48:39.183872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:48:39.183891 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:48:39.183910 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:39.183929 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:39.183953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:48:39.183972 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:48:39.184012 kernel: kvm [1]: HYP mode not available
Feb 13 19:48:39.184033 kernel: Initialise system trusted keyrings
Feb 13 19:48:39.184053 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:48:39.184072 kernel: Key type asymmetric registered
Feb 13 19:48:39.184090 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:48:39.184109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:48:39.184128 kernel: io scheduler mq-deadline registered
Feb 13 19:48:39.184153 kernel: io scheduler kyber registered
Feb 13 19:48:39.184172 kernel: io scheduler bfq registered
Feb 13 19:48:39.185498 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:48:39.185543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:48:39.185563 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:48:39.185584 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:48:39.185603 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:48:39.185622 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:48:39.185653 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:48:39.185900 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:48:39.185932 kernel: printk: console [ttyS0] disabled
Feb 13 19:48:39.185952 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:48:39.185972 kernel: printk: console [ttyS0] enabled
Feb 13 19:48:39.185992 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:48:39.186011 kernel: thunder_xcv, ver 1.0
Feb 13 19:48:39.186030 kernel: thunder_bgx, ver 1.0
Feb 13 19:48:39.186051 kernel: nicpf, ver 1.0
Feb 13 19:48:39.186078 kernel: nicvf, ver 1.0
Feb 13 19:48:39.186394 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:48:39.186597 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:48:38 UTC (1739476118)
Feb 13 19:48:39.186624 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:48:39.186644 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:48:39.186663 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:48:39.186682 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:48:39.186701 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:48:39.186726 kernel: Segment Routing with IPv6
Feb 13 19:48:39.186745 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:48:39.186764 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:48:39.186782 kernel: Key type dns_resolver registered
Feb 13 19:48:39.186816 kernel: registered taskstats version 1
Feb 13 19:48:39.186837 kernel: Loading compiled-in X.509 certificates
Feb 13 19:48:39.186856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:48:39.186874 kernel: Key type .fscrypt registered
Feb 13 19:48:39.186893 kernel: Key type fscrypt-provisioning registered
Feb 13 19:48:39.186917 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:48:39.186936 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:48:39.186955 kernel: ima: No architecture policies found
Feb 13 19:48:39.186973 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:48:39.186992 kernel: clk: Disabling unused clocks
Feb 13 19:48:39.187010 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:48:39.187029 kernel: Run /init as init process
Feb 13 19:48:39.187047 kernel:   with arguments:
Feb 13 19:48:39.187065 kernel:     /init
Feb 13 19:48:39.187083 kernel:   with environment:
Feb 13 19:48:39.187106 kernel:     HOME=/
Feb 13 19:48:39.187124 kernel:     TERM=linux
Feb 13 19:48:39.187142 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:48:39.187165 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:39.187189 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:39.187209 systemd[1]: Detected architecture arm64.
Feb 13 19:48:39.187229 systemd[1]: Running in initrd.
Feb 13 19:48:39.189349 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:48:39.189382 systemd[1]: Hostname set to <localhost>.
Feb 13 19:48:39.189404 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:39.189425 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:48:39.189446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:39.189467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:39.189489 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:48:39.189510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:39.189537 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:48:39.189558 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:48:39.189583 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:48:39.189605 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:48:39.189626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:39.189647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:39.189667 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:48:39.189693 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:39.189713 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:39.189733 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:48:39.189754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:39.189775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:39.189796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:39.189816 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:39.189837 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:39.189857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:39.189883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:39.189903 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:48:39.189924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:48:39.189944 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:39.189965 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:48:39.189986 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:48:39.190008 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:39.190029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:39.190056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:39.190077 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:39.190164 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:48:39.190214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:39.191363 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:48:39.191410 systemd-journald[251]: Journal started
Feb 13 19:48:39.191452 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2f22deea7df03c407779309475a130) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:39.173640 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:48:39.209813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:39.209861 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:39.204513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:39.224453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:48:39.225626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:39.245989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:39.261501 kernel: Bridge firewalling registered
Feb 13 19:48:39.250364 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:39.254824 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:48:39.257301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:39.266359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:39.272496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:39.294436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:39.318582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:39.323619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:39.331525 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:48:39.342678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:39.355527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:39.379128 dracut-cmdline[284]: dracut-dracut-053
Feb 13 19:48:39.388449 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:39.453166 systemd-resolved[288]: Positive Trust Anchors:
Feb 13 19:48:39.453201 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:39.453283 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:39.531277 kernel: SCSI subsystem initialized
Feb 13 19:48:39.539278 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:48:39.552290 kernel: iscsi: registered transport (tcp)
Feb 13 19:48:39.573920 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:48:39.573994 kernel: QLogic iSCSI HBA Driver
Feb 13 19:48:39.665358 kernel: random: crng init done
Feb 13 19:48:39.665433 systemd-resolved[288]: Defaulting to hostname 'linux'.
Feb 13 19:48:39.668805 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:39.672905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:39.693542 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:39.703574 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:48:39.743383 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:48:39.743457 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:48:39.745183 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:48:39.810284 kernel: raid6: neonx8   gen()  6744 MB/s
Feb 13 19:48:39.827273 kernel: raid6: neonx4   gen()  6562 MB/s
Feb 13 19:48:39.844272 kernel: raid6: neonx2   gen()  5479 MB/s
Feb 13 19:48:39.861272 kernel: raid6: neonx1   gen()  3968 MB/s
Feb 13 19:48:39.878272 kernel: raid6: int64x8  gen()  3824 MB/s
Feb 13 19:48:39.895272 kernel: raid6: int64x4  gen()  3704 MB/s
Feb 13 19:48:39.912279 kernel: raid6: int64x2  gen()  3613 MB/s
Feb 13 19:48:39.930029 kernel: raid6: int64x1  gen()  2764 MB/s
Feb 13 19:48:39.930061 kernel: raid6: using algorithm neonx8 gen() 6744 MB/s
Feb 13 19:48:39.948028 kernel: raid6: .... xor() 4822 MB/s, rmw enabled
Feb 13 19:48:39.948064 kernel: raid6: using neon recovery algorithm
Feb 13 19:48:39.955276 kernel: xor: measuring software checksum speed
Feb 13 19:48:39.957373 kernel:    8regs           : 10219 MB/sec
Feb 13 19:48:39.957412 kernel:    32regs          : 11913 MB/sec
Feb 13 19:48:39.958526 kernel:    arm64_neon      :  9590 MB/sec
Feb 13 19:48:39.958562 kernel: xor: using function: 32regs (11913 MB/sec)
Feb 13 19:48:40.042292 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:48:40.060907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:40.069490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:40.109196 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 19:48:40.117944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:40.136817 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:48:40.168643 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Feb 13 19:48:40.224288 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:48:40.234551 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:48:40.352052 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:40.362624 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:48:40.402584 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:48:40.406552 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:48:40.412155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:40.414300 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:48:40.427093 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:48:40.465965 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:48:40.536568 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:48:40.536640 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:48:40.555142 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:48:40.555428 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:48:40.558370 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2e:0a:b9:ee:f1
Feb 13 19:48:40.561662 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:48:40.562470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:40.568878 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:40.570950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:48:40.571204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:40.573614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:40.585863 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:48:40.591785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:40.614999 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:48:40.617291 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:48:40.625314 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:48:40.630326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:40.642267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:48:40.642333 kernel: GPT:9289727 != 16777215
Feb 13 19:48:40.642089 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:40.655457 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:48:40.655496 kernel: GPT:9289727 != 16777215
Feb 13 19:48:40.655521 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:48:40.655546 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:40.696024 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:40.756346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Feb 13 19:48:40.762430 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (544)
Feb 13 19:48:40.842610 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:48:40.861450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:48:40.890121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:48:40.906079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:40.908703 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:40.923549 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:48:40.945706 disk-uuid[662]: Primary Header is updated.
Feb 13 19:48:40.945706 disk-uuid[662]: Secondary Entries is updated.
Feb 13 19:48:40.945706 disk-uuid[662]: Secondary Header is updated.
Feb 13 19:48:40.958278 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:40.968279 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:41.980056 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:41.980125 disk-uuid[663]: The operation has completed successfully.
Feb 13 19:48:42.153321 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:48:42.154705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:48:42.213553 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:48:42.224118 sh[921]: Success
Feb 13 19:48:42.244284 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:48:42.329026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:48:42.353470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:48:42.359319 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:48:42.402588 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:48:42.402651 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:42.402688 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:48:42.405507 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:48:42.405542 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:48:42.468279 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:48:42.492537 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:48:42.496513 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:48:42.504557 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:48:42.516540 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:48:42.550282 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:42.550347 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:42.550384 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:42.559595 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:42.577514 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:48:42.581288 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:42.591725 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:48:42.606701 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:48:42.685265 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:48:42.698554 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:48:42.759771 systemd-networkd[1113]: lo: Link UP
Feb 13 19:48:42.759793 systemd-networkd[1113]: lo: Gained carrier
Feb 13 19:48:42.764785 systemd-networkd[1113]: Enumeration completed
Feb 13 19:48:42.766335 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:48:42.776940 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:42.776952 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:48:42.781809 systemd[1]: Reached target network.target - Network.
Feb 13 19:48:42.788541 systemd-networkd[1113]: eth0: Link UP
Feb 13 19:48:42.788553 systemd-networkd[1113]: eth0: Gained carrier
Feb 13 19:48:42.788571 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:42.811323 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.22.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:48:43.014949 ignition[1042]: Ignition 2.19.0
Feb 13 19:48:43.014970 ignition[1042]: Stage: fetch-offline
Feb 13 19:48:43.015518 ignition[1042]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:43.015543 ignition[1042]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:43.021634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:48:43.016506 ignition[1042]: Ignition finished successfully
Feb 13 19:48:43.034572 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:48:43.063404 ignition[1122]: Ignition 2.19.0
Feb 13 19:48:43.063435 ignition[1122]: Stage: fetch
Feb 13 19:48:43.065016 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:43.065042 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:43.065693 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:43.084448 ignition[1122]: PUT result: OK
Feb 13 19:48:43.087082 ignition[1122]: parsed url from cmdline: ""
Feb 13 19:48:43.087108 ignition[1122]: no config URL provided
Feb 13 19:48:43.087124 ignition[1122]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:48:43.087177 ignition[1122]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:48:43.087210 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:43.090849 ignition[1122]: PUT result: OK
Feb 13 19:48:43.092652 ignition[1122]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:48:43.097868 ignition[1122]: GET result: OK
Feb 13 19:48:43.098088 ignition[1122]: parsing config with SHA512: 418c396e41b19419f7a1351ccaf2d6b1288108c0cc2bf54de4f7b6c3f33af6c0f6e92619b6cf0173c59070c609366fc705ad8bd5efea72348c194e0827222300
Feb 13 19:48:43.111446 unknown[1122]: fetched base config from "system"
Feb 13 19:48:43.111680 unknown[1122]: fetched base config from "system"
Feb 13 19:48:43.113648 ignition[1122]: fetch: fetch complete
Feb 13 19:48:43.111705 unknown[1122]: fetched user config from "aws"
Feb 13 19:48:43.113663 ignition[1122]: fetch: fetch passed
Feb 13 19:48:43.121169 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:48:43.113768 ignition[1122]: Ignition finished successfully
Feb 13 19:48:43.146675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:48:43.171441 ignition[1128]: Ignition 2.19.0
Feb 13 19:48:43.172211 ignition[1128]: Stage: kargs
Feb 13 19:48:43.172877 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:43.172901 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:43.173049 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:43.175976 ignition[1128]: PUT result: OK
Feb 13 19:48:43.186058 ignition[1128]: kargs: kargs passed
Feb 13 19:48:43.186328 ignition[1128]: Ignition finished successfully
Feb 13 19:48:43.191299 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:48:43.201537 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:48:43.229454 ignition[1134]: Ignition 2.19.0
Feb 13 19:48:43.229482 ignition[1134]: Stage: disks
Feb 13 19:48:43.230537 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:43.230566 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:43.230719 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:43.232453 ignition[1134]: PUT result: OK
Feb 13 19:48:43.241916 ignition[1134]: disks: disks passed
Feb 13 19:48:43.242069 ignition[1134]: Ignition finished successfully
Feb 13 19:48:43.246112 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:48:43.250349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:48:43.252540 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:48:43.266236 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:48:43.270085 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:48:43.271983 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:48:43.292664 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:48:43.336451 systemd-fsck[1142]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:48:43.342015 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:48:43.352434 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:48:43.446276 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:48:43.448582 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:48:43.452055 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:48:43.465488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:43.472435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:48:43.477858 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:48:43.477956 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:48:43.478046 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:48:43.499281 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1161)
Feb 13 19:48:43.504680 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:43.504740 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:43.507480 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:43.509401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:48:43.520675 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:48:43.529268 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:43.532396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:43.789953 initrd-setup-root[1185]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:48:43.796997 initrd-setup-root[1192]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:48:43.805451 initrd-setup-root[1199]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:48:43.813970 initrd-setup-root[1206]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:48:44.018695 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:48:44.026463 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:48:44.041564 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:48:44.058326 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.058149 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:48:44.092053 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:48:44.101887 ignition[1274]: INFO     : Ignition 2.19.0
Feb 13 19:48:44.103740 ignition[1274]: INFO     : Stage: mount
Feb 13 19:48:44.105517 ignition[1274]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:44.107516 ignition[1274]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:44.107516 ignition[1274]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:44.112113 ignition[1274]: INFO     : PUT result: OK
Feb 13 19:48:44.127612 ignition[1274]: INFO     : mount: mount passed
Feb 13 19:48:44.129270 ignition[1274]: INFO     : Ignition finished successfully
Feb 13 19:48:44.133318 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:48:44.144422 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:48:44.454615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:44.487260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1286)
Feb 13 19:48:44.491931 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.491987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:44.492016 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:44.497277 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:44.500469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:44.543900 ignition[1303]: INFO     : Ignition 2.19.0
Feb 13 19:48:44.543900 ignition[1303]: INFO     : Stage: files
Feb 13 19:48:44.547091 ignition[1303]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:44.547091 ignition[1303]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:44.551128 ignition[1303]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:44.553521 ignition[1303]: INFO     : PUT result: OK
Feb 13 19:48:44.558312 ignition[1303]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 19:48:44.560911 ignition[1303]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 19:48:44.560911 ignition[1303]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:48:44.568824 ignition[1303]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:48:44.571921 ignition[1303]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 19:48:44.575143 unknown[1303]: wrote ssh authorized keys file for user: core
Feb 13 19:48:44.578212 ignition[1303]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:48:44.582380 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:48:44.582380 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:48:44.582380 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:48:44.582380 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:48:44.667439 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:48:44.736399 systemd-networkd[1113]: eth0: Gained IPv6LL
Feb 13 19:48:44.857052 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:48:44.857052 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:44.863783 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:48:45.367977 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:48:45.820167 ignition[1303]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:45.820167 ignition[1303]: INFO     : files: op(c): [started]  processing unit "containerd.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(c): [finished] processing unit "containerd.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(e): [started]  processing unit "prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:45.829357 ignition[1303]: INFO     : files: files passed
Feb 13 19:48:45.829357 ignition[1303]: INFO     : Ignition finished successfully
Feb 13 19:48:45.830564 systemd[1]: Finished ignition-files.service - Ignition (files).
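[Editor's note] For reference, a minimal sketch of an Ignition-style config that could produce the file, link, and unit operations logged in the stage above. This is not the instance's actual user data (which is not shown in this log); field names assume the Ignition v3.x spec, and the SSH key, file contents, and unit bodies are placeholders. Written as a Python script that assembles and prints the JSON:

import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            # matches op(2): "adding ssh keys to user 'core'"; key value is a placeholder
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            # op(3): cgroup v1 marker file
            {"path": "/etc/flatcar-cgroupv1", "contents": {"source": "data:,"}},
            # op(4): helm archive fetched over HTTPS
            {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
            # op(b): kubernetes sysext image
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw"}},
        ],
        "links": [
            # op(a): activate the sysext by linking it into /etc/extensions
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # op(c)/op(d): containerd drop-in selecting the cgroupfs driver (body is a placeholder)
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# placeholder drop-in body\n"}]},
            # op(e)/op(10): install and preset-enable prepare-helm.service (body is a placeholder)
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# placeholder unit body\n"},
        ]
    },
}

print(json.dumps(config, indent=2))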
Feb 13 19:48:45.853748 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:48:45.870784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:48:45.890969 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:48:45.891943 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:48:45.913721 initrd-setup-root-after-ignition[1331]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:45.913721 initrd-setup-root-after-ignition[1331]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:45.920800 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:45.926796 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:48:45.930633 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:48:45.944665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:48:45.991426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:48:45.991807 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:48:45.999485 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:48:46.001418 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:48:46.003380 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:48:46.019664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:48:46.048307 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:48:46.071660 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:48:46.094681 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:46.099312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:46.099904 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:48:46.100867 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:48:46.101093 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:48:46.102092 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:48:46.102433 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:48:46.102712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:48:46.103003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:48:46.103322 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:48:46.103608 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:48:46.103906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:48:46.104528 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:48:46.104825 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:48:46.105111 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:48:46.105379 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:48:46.105583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:48:46.106571 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:46.106907 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:46.107126 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:48:46.157601 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:46.162055 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:48:46.162438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:48:46.173672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:48:46.174393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:48:46.180727 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:48:46.181108 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:48:46.193676 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:48:46.201628 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:48:46.204841 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:48:46.206896 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:46.209697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:48:46.209918 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:48:46.231622 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:48:46.233449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:48:46.247393 ignition[1356]: INFO     : Ignition 2.19.0
Feb 13 19:48:46.247393 ignition[1356]: INFO     : Stage: umount
Feb 13 19:48:46.247393 ignition[1356]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:46.247393 ignition[1356]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:46.247393 ignition[1356]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:46.262378 ignition[1356]: INFO     : PUT result: OK
Feb 13 19:48:46.262378 ignition[1356]: INFO     : umount: umount passed
Feb 13 19:48:46.262378 ignition[1356]: INFO     : Ignition finished successfully
Feb 13 19:48:46.261922 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:48:46.262163 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:48:46.264999 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:48:46.265093 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:48:46.265777 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:48:46.265856 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:48:46.266071 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:48:46.266141 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:48:46.268555 systemd[1]: Stopped target network.target - Network.
Feb 13 19:48:46.268751 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:48:46.268852 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:48:46.269088 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:48:46.269635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:48:46.287800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:46.292921 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:48:46.297658 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:48:46.300562 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:48:46.300659 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:46.304735 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:48:46.304833 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:46.320546 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:48:46.320686 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:48:46.328434 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:48:46.328522 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:46.335857 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:48:46.339328 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:46.343470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:48:46.344604 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:48:46.344773 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:48:46.346635 systemd-networkd[1113]: eth0: DHCPv6 lease lost
Feb 13 19:48:46.350112 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:48:46.350341 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:48:46.356119 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:48:46.356361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:46.361533 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:48:46.361746 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:48:46.365829 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:48:46.365948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:46.383031 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:48:46.392896 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:48:46.393015 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:48:46.395783 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:48:46.395865 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:46.402101 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:48:46.402186 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:46.404183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:48:46.404275 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:46.406700 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:46.452893 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:48:46.453293 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:48:46.463780 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:48:46.465753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:46.468961 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:48:46.469096 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:46.471429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:48:46.471500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:46.473662 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:48:46.473747 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:46.475867 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:48:46.475959 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:46.478113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:48:46.478190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:46.503290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:48:46.509528 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:48:46.509644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:46.510329 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:48:46.510406 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:46.510979 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:48:46.511051 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:46.535667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:48:46.535767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:46.554782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:48:46.557137 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:48:46.562756 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:48:46.572578 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:48:46.594619 systemd[1]: Switching root.
Feb 13 19:48:46.638572 systemd-journald[251]: Journal stopped
Feb 13 19:48:48.493080 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:48:48.493612 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 19:48:48.493659 kernel: SELinux:  policy capability open_perms=1
Feb 13 19:48:48.493690 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 19:48:48.493726 kernel: SELinux:  policy capability always_check_network=0
Feb 13 19:48:48.493757 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 19:48:48.493787 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 19:48:48.493820 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 19:48:48.493850 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 19:48:48.493883 kernel: audit: type=1403 audit(1739476126.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:48:48.493923 systemd[1]: Successfully loaded SELinux policy in 49.950ms.
Feb 13 19:48:48.493973 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.840ms.
Feb 13 19:48:48.494009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:48.494044 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:48.494076 systemd[1]: Detected architecture arm64.
Feb 13 19:48:48.494107 systemd[1]: Detected first boot.
Feb 13 19:48:48.494143 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:48.494176 zram_generator::config[1420]: No configuration found.
Feb 13 19:48:48.494211 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:48:48.494298 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:48:48.494337 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:48:48.494375 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:48:48.494409 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:48:48.494441 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:48:48.494473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:48:48.494505 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:48:48.494545 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:48:48.494577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:48:48.494609 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:48:48.494645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:48.494678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:48.494708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:48:48.494740 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:48:48.494772 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:48:48.494804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:48.494835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:48:48.494866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:48.494899 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:48:48.494933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:48.494964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:48:48.494997 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:48.495028 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:48.495058 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:48:48.495089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:48:48.495119 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:48.495149 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:48.495183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:48.495215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:48.498019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:48.498074 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:48:48.498107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:48:48.498140 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:48:48.498777 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:48:48.498810 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:48:48.498840 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:48:48.498878 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:48:48.498909 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:48:48.498941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:48.498972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:48.499001 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:48:48.499030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:48.499684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:48:48.499719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:48.499751 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:48:48.502132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:48:48.502174 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:48:48.502210 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:48:48.502257 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:48:48.502318 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:48.502349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:48.502380 kernel: fuse: init (API version 7.39)
Feb 13 19:48:48.502412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:48:48.502447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:48:48.502479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:48:48.502513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:48:48.502543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:48:48.502572 kernel: ACPI: bus type drm_connector registered
Feb 13 19:48:48.502601 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:48:48.502630 kernel: loop: module loaded
Feb 13 19:48:48.502658 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:48:48.502688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:48:48.502722 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:48:48.502752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:48.502782 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:48:48.502811 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:48:48.502843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:48.502875 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:48.502906 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:48:48.502936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:48:48.502965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:48.503000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:48.503030 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:48:48.503059 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:48:48.503089 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:48:48.503118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:48:48.503154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:48.503233 systemd-journald[1517]: Collecting audit messages is disabled.
Feb 13 19:48:48.504054 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:48:48.504827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:48:48.504864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:48:48.504897 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:48:48.504930 systemd-journald[1517]: Journal started
Feb 13 19:48:48.504986 systemd-journald[1517]: Runtime Journal (/run/log/journal/ec2f22deea7df03c407779309475a130) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:48.513702 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:48:48.525665 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:48:48.533382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:48:48.556275 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:48:48.562270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:48:48.573313 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:48:48.577294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:48:48.588291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:48.613314 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:48.622309 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:48.627309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:48:48.629839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:48:48.632819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:48:48.682818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:48.694384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:48:48.704669 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:48:48.706818 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
Feb 13 19:48:48.706848 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
Feb 13 19:48:48.727756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:48.743812 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:48:48.759281 systemd-journald[1517]: Time spent on flushing to /var/log/journal/ec2f22deea7df03c407779309475a130 is 47.397ms for 902 entries.
Feb 13 19:48:48.759281 systemd-journald[1517]: System Journal (/var/log/journal/ec2f22deea7df03c407779309475a130) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:48:48.814993 systemd-journald[1517]: Received client request to flush runtime journal.
Feb 13 19:48:48.825012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:48.829933 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:48:48.841474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:48:48.865559 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:48:48.887774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:48.891393 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:48:48.920989 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Feb 13 19:48:48.921593 systemd-tmpfiles[1590]: ACLs are not supported, ignoring.
Feb 13 19:48:48.932103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:49.618819 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:48:49.630604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:49.689434 systemd-udevd[1596]: Using default interface naming scheme 'v255'.
Feb 13 19:48:49.730721 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:49.745572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:48:49.786535 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:48:49.856119 (udev-worker)[1608]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:48:49.885530 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:48:49.951478 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:48:50.102345 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1618)
Feb 13 19:48:50.127992 systemd-networkd[1600]: lo: Link UP
Feb 13 19:48:50.128008 systemd-networkd[1600]: lo: Gained carrier
Feb 13 19:48:50.131368 systemd-networkd[1600]: Enumeration completed
Feb 13 19:48:50.132780 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:48:50.136129 systemd-networkd[1600]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:50.136138 systemd-networkd[1600]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:48:50.140196 systemd-networkd[1600]: eth0: Link UP
Feb 13 19:48:50.140584 systemd-networkd[1600]: eth0: Gained carrier
Feb 13 19:48:50.140615 systemd-networkd[1600]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:50.146090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:48:50.187651 systemd-networkd[1600]: eth0: DHCPv4 address 172.31.22.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:48:50.223796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:50.373600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:48:50.390910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:48:50.405552 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:48:50.408358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:50.426300 lvm[1723]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:48:50.459899 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:48:50.463390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:50.473761 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:48:50.486594 lvm[1728]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:48:50.521642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:48:50.524254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:48:50.526627 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:48:50.526672 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:48:50.528616 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:48:50.532713 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:48:50.543534 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:48:50.553563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:48:50.555715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:50.557810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:48:50.578506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:48:50.585538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:48:50.591592 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:48:50.614132 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:48:50.616713 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:48:50.626042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:48:50.640422 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 19:48:50.720279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:48:50.759285 kernel: loop1: detected capacity change from 0 to 114432
Feb 13 19:48:50.817299 kernel: loop2: detected capacity change from 0 to 52536
Feb 13 19:48:50.921418 kernel: loop3: detected capacity change from 0 to 114328
Feb 13 19:48:50.967279 kernel: loop4: detected capacity change from 0 to 194096
Feb 13 19:48:51.000461 kernel: loop5: detected capacity change from 0 to 114432
Feb 13 19:48:51.019362 kernel: loop6: detected capacity change from 0 to 52536
Feb 13 19:48:51.031304 kernel: loop7: detected capacity change from 0 to 114328
Feb 13 19:48:51.050576 (sd-merge)[1750]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:48:51.052810 (sd-merge)[1750]: Merged extensions into '/usr'.
Feb 13 19:48:51.060505 systemd[1]: Reloading requested from client PID 1736 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:48:51.060696 systemd[1]: Reloading...
Feb 13 19:48:51.210127 zram_generator::config[1778]: No configuration found.
Feb 13 19:48:51.235190 ldconfig[1732]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:48:51.456439 systemd-networkd[1600]: eth0: Gained IPv6LL
Feb 13 19:48:51.466706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:48:51.608039 systemd[1]: Reloading finished in 546 ms.
Feb 13 19:48:51.634501 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:48:51.638891 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:48:51.642347 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:48:51.656528 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:48:51.665615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:51.682494 systemd[1]: Reloading requested from client PID 1839 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:48:51.682527 systemd[1]: Reloading...
Feb 13 19:48:51.715639 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:48:51.716373 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:48:51.718129 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:48:51.718891 systemd-tmpfiles[1840]: ACLs are not supported, ignoring.
Feb 13 19:48:51.719044 systemd-tmpfiles[1840]: ACLs are not supported, ignoring.
Feb 13 19:48:51.729302 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:48:51.729329 systemd-tmpfiles[1840]: Skipping /boot
Feb 13 19:48:51.750267 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:48:51.750296 systemd-tmpfiles[1840]: Skipping /boot
Feb 13 19:48:51.828293 zram_generator::config[1869]: No configuration found.
Feb 13 19:48:52.070769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:48:52.212066 systemd[1]: Reloading finished in 528 ms.
Feb 13 19:48:52.235819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:52.257621 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:48:52.266913 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:48:52.275535 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:48:52.294732 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:52.305527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:48:52.327044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:52.336473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:52.355717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:52.379210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:48:52.382443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:52.399334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:52.399749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:52.405842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:52.407215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:52.424100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:52.435156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:52.456397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:48:52.459769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:52.460525 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:48:52.474154 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:48:52.479938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:52.480350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:52.484930 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:48:52.485893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:48:52.497678 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:48:52.503621 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:48:52.522959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:52.525436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:52.529150 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:48:52.530294 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:48:52.536179 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:48:52.537476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:48:52.551565 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:48:52.565538 augenrules[1973]: No rules
Feb 13 19:48:52.570065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:48:52.598308 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:48:52.607763 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:48:52.612738 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:48:52.641799 systemd-resolved[1932]: Positive Trust Anchors:
Feb 13 19:48:52.641831 systemd-resolved[1932]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:52.641893 systemd-resolved[1932]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:52.650758 systemd-resolved[1932]: Defaulting to hostname 'linux'.
Feb 13 19:48:52.654295 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:52.656912 systemd[1]: Reached target network.target - Network.
Feb 13 19:48:52.658904 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:48:52.661615 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:52.663859 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:48:52.666303 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:48:52.668523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:48:52.670947 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:48:52.672976 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:48:52.675160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:48:52.677374 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:48:52.677435 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:48:52.679026 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:48:52.682540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:48:52.687720 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:48:52.692523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:48:52.698130 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:48:52.700266 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:48:52.702122 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:48:52.704123 systemd[1]: System is tainted: cgroupsv1
Feb 13 19:48:52.704330 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:48:52.704499 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:48:52.708409 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:48:52.716478 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:48:52.730555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:48:52.751390 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:48:52.774873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:48:52.777654 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:48:52.790476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:48:52.800784 jq[1989]: false
Feb 13 19:48:52.806016 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:48:52.819743 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:48:52.826143 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:48:52.845771 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:48:52.862487 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:48:52.884971 dbus-daemon[1988]: [system] SELinux support is enabled
Feb 13 19:48:52.892505 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:48:52.907190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:48:52.924713 dbus-daemon[1988]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1600 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:48:52.943525 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:48:52.946928 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:48:52.953212 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found loop4
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found loop5
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found loop6
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found loop7
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p1
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p2
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p3
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found usr
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p4
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p6
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p7
Feb 13 19:48:52.976393 extend-filesystems[1993]: Found nvme0n1p9
Feb 13 19:48:52.976393 extend-filesystems[1993]: Checking size of /dev/nvme0n1p9
Feb 13 19:48:52.969518 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:48:52.985413 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:48:53.009588 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:48:53.010118 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:48:53.026834 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:48:53.029067 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:48:53.042595 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:48:53.052563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:48:53.053047 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:48:53.083504 jq[2022]: true
Feb 13 19:48:53.097723 update_engine[2021]: I20250213 19:48:53.097023  2021 main.cc:92] Flatcar Update Engine starting
Feb 13 19:48:53.127375 update_engine[2021]: I20250213 19:48:53.126212  2021 update_check_scheduler.cc:74] Next update check in 9m3s
Feb 13 19:48:53.142819 extend-filesystems[1993]: Resized partition /dev/nvme0n1p9
Feb 13 19:48:53.166101 extend-filesystems[2048]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:48:53.224319 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.195 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.202 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.205 INFO Fetch successful
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.207 INFO Fetch successful
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.207 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.209 INFO Fetch successful
Feb 13 19:48:53.224460 coreos-metadata[1987]: Feb 13 19:48:53.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: ----------------------------------------------------
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: corporation.  Support and training for ntp-4 are
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: available at https://www.nwtime.org/support
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: ----------------------------------------------------
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: proto: precision = 0.096 usec (-23)
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: basedate set to 2025-02-01
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen normally on 3 eth0 172.31.22.232:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen normally on 4 lo [::1]:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listen normally on 5 eth0 [fe80::42e:aff:feb9:eef1%2]:123
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:48:53.225403 ntpd[1997]: 13 Feb 19:48:53 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:48:53.167471 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:48:53.228003 (ntainerd)[2037]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:48:53.167518 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.233 INFO Fetch successful
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.240 INFO Fetch failed with 404: resource not found
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.247 INFO Fetch successful
Feb 13 19:48:53.251534 coreos-metadata[1987]: Feb 13 19:48:53.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:48:53.251890 tar[2030]: linux-arm64/helm
Feb 13 19:48:53.234587 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:48:53.167539 ntpd[1997]: ----------------------------------------------------
Feb 13 19:48:53.245141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:48:53.167558 ntpd[1997]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:48:53.245234 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:48:53.167577 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:48:53.167596 ntpd[1997]: corporation.  Support and training for ntp-4 are
Feb 13 19:48:53.167615 ntpd[1997]: available at https://www.nwtime.org/support
Feb 13 19:48:53.167634 ntpd[1997]: ----------------------------------------------------
Feb 13 19:48:53.267479 coreos-metadata[1987]: Feb 13 19:48:53.257 INFO Fetch successful
Feb 13 19:48:53.267479 coreos-metadata[1987]: Feb 13 19:48:53.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:48:53.267479 coreos-metadata[1987]: Feb 13 19:48:53.259 INFO Fetch successful
Feb 13 19:48:53.267479 coreos-metadata[1987]: Feb 13 19:48:53.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:48:53.172216 ntpd[1997]: proto: precision = 0.096 usec (-23)
Feb 13 19:48:53.173236 ntpd[1997]: basedate set to 2025-02-01
Feb 13 19:48:53.173288 ntpd[1997]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:48:53.178745 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:48:53.178834 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:48:53.179121 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:48:53.179190 ntpd[1997]: Listen normally on 3 eth0 172.31.22.232:123
Feb 13 19:48:53.179280 ntpd[1997]: Listen normally on 4 lo [::1]:123
Feb 13 19:48:53.179362 ntpd[1997]: Listen normally on 5 eth0 [fe80::42e:aff:feb9:eef1%2]:123
Feb 13 19:48:53.179426 ntpd[1997]: Listening on routing socket on fd #22 for interface updates
Feb 13 19:48:53.214437 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:48:53.272679 coreos-metadata[1987]: Feb 13 19:48:53.268 INFO Fetch successful
Feb 13 19:48:53.272679 coreos-metadata[1987]: Feb 13 19:48:53.268 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:48:53.214489 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
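At this point ntpd is listening on the loopback and eth0 sockets shown above, but the kernel clock is still flagged unsynchronized. A minimal SNTP probe, written here as an illustrative Python sketch (it is not part of the boot log or of ntpd itself, and assumes ntpd stays reachable on 127.0.0.1:123), is one way to confirm that the local service answers once it has selected a time source:

    # Illustrative SNTP (RFC 4330) query against the local ntpd shown above.
    # Assumes ntpd is reachable on 127.0.0.1:123; not taken from the log itself.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = "127.0.0.1", timeout: float = 2.0) -> float:
        # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
        packet = b"\x1b" + 47 * b"\0"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        # Transmit timestamp (seconds field) sits at bytes 40..43 of the reply.
        seconds = struct.unpack("!I", data[40:44])[0]
        return seconds - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        print(time.ctime(sntp_time()))

Until ntpd has synchronized, a reply may still carry stratum 16, so the returned time is only as good as the source ntpd has chosen.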
Feb 13 19:48:53.272891 coreos-metadata[1987]: Feb 13 19:48:53.272 INFO Fetch successful
Feb 13 19:48:53.232532 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:48:53.273297 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:48:53.281450 jq[2034]: true
Feb 13 19:48:53.275433 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:48:53.275482 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:48:53.289747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:48:53.295699 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:48:53.333498 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:48:53.335283 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:48:53.365535 extend-filesystems[2048]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:48:53.365535 extend-filesystems[2048]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:48:53.365535 extend-filesystems[2048]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:48:53.408542 extend-filesystems[1993]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:48:53.414141 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:48:53.414679 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
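The extend-filesystems unit above grew the mounted root filesystem on /dev/nvme0n1p9 on-line to 1489915 4k blocks. Purely as an illustration of the equivalent manual step (the unit's own implementation is not shown in this log), a single resize2fs call against the mounted device does the same thing; the device path below is the one reported above:

    # Illustrative equivalent of the on-line resize logged by extend-filesystems.
    # Assumes resize2fs is in PATH; must run as root against the device from the log.
    import subprocess

    DEVICE = "/dev/nvme0n1p9"  # root filesystem device reported in the log

    def grow_ext4(device: str) -> None:
        # With no explicit size argument, resize2fs grows the filesystem to fill
        # the underlying partition, which ext4 supports while mounted.
        subprocess.run(["resize2fs", device], check=True)

    if __name__ == "__main__":
        grow_ext4(DEVICE)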
Feb 13 19:48:53.426582 systemd-logind[2018]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:48:53.426635 systemd-logind[2018]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 19:48:53.427792 systemd-logind[2018]: New seat seat0.
Feb 13 19:48:53.491057 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:48:53.493762 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:48:53.549045 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:48:53.555110 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:48:53.654032 bash[2107]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:48:53.655767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:48:53.673086 systemd[1]: Starting sshkeys.service...
Feb 13 19:48:53.740500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:48:53.764094 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:48:53.835030 amazon-ssm-agent[2075]: Initializing new seelog logger
Feb 13 19:48:53.836616 amazon-ssm-agent[2075]: New Seelog Logger Creation Complete
Feb 13 19:48:53.841921 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.841921 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 processing appconfig overrides
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 processing appconfig overrides
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.847837 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 processing appconfig overrides
Feb 13 19:48:53.851080 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO Proxy environment variables:
Feb 13 19:48:53.872018 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.872018 amazon-ssm-agent[2075]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:48:53.872018 amazon-ssm-agent[2075]: 2025/02/13 19:48:53 processing appconfig overrides
Feb 13 19:48:53.895749 locksmithd[2063]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:48:53.956111 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO https_proxy:
Feb 13 19:48:53.983521 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2082)
Feb 13 19:48:54.061036 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO http_proxy:
Feb 13 19:48:54.112305 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:48:54.112610 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:48:54.117666 dbus-daemon[1988]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2060 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:48:54.149955 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:48:54.176358 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO no_proxy:
Feb 13 19:48:54.186182 containerd[2037]: time="2025-02-13T19:48:54.186032613Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 19:48:54.231963 coreos-metadata[2122]: Feb 13 19:48:54.186 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:48:54.231963 coreos-metadata[2122]: Feb 13 19:48:54.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:48:54.231963 coreos-metadata[2122]: Feb 13 19:48:54.196 INFO Fetch successful
Feb 13 19:48:54.231963 coreos-metadata[2122]: Feb 13 19:48:54.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:48:54.231963 coreos-metadata[2122]: Feb 13 19:48:54.197 INFO Fetch successful
Feb 13 19:48:54.200632 unknown[2122]: wrote ssh authorized keys file for user: core
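The metadata agent above first PUTs to the IMDSv2 token endpoint and then fetches individual metadata paths with that token before writing the authorized_keys file. A minimal Python sketch of the same two-step exchange follows; the endpoint and paths are the ones visible in the log, while the header names are the standard IMDSv2 ones and are an assumption, not something the log shows:

    # Illustrative IMDSv2 exchange mirroring the coreos-metadata requests above.
    # Endpoint and paths come from the log; headers are the standard IMDSv2 names.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str) -> str:
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        with urllib.request.urlopen(token_req, timeout=2) as resp:
            token = resp.read().decode()
        data_req = urllib.request.Request(
            f"{IMDS}{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        print(imds_get("/2021-01-03/meta-data/public-keys"))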
Feb 13 19:48:54.290389 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:48:54.319336 polkitd[2159]: Started polkitd version 121
Feb 13 19:48:54.381218 update-ssh-keys[2182]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:48:54.380657 polkitd[2159]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:48:54.380809 polkitd[2159]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:48:54.385614 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:48:54.388559 amazon-ssm-agent[2075]: 2025-02-13 19:48:53 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:48:54.397313 polkitd[2159]: Finished loading, compiling and executing 2 rules
Feb 13 19:48:54.405571 systemd[1]: Finished sshkeys.service.
Feb 13 19:48:54.413445 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:48:54.413816 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:48:54.417150 polkitd[2159]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:48:54.478307 containerd[2037]: time="2025-02-13T19:48:54.477224063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.492748 containerd[2037]: time="2025-02-13T19:48:54.492479327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:48:54.492748 containerd[2037]: time="2025-02-13T19:48:54.492565223Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:48:54.492748 containerd[2037]: time="2025-02-13T19:48:54.492602843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:48:54.493043 containerd[2037]: time="2025-02-13T19:48:54.492904007Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:48:54.493043 containerd[2037]: time="2025-02-13T19:48:54.492939623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.494492 containerd[2037]: time="2025-02-13T19:48:54.493057487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:48:54.494492 containerd[2037]: time="2025-02-13T19:48:54.493086167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.494877 systemd-hostnamed[2060]: Hostname set to <ip-172-31-22-232> (transient)
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.494713187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.494770703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.494807831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.496506491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.496734467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.497037 containerd[2037]: time="2025-02-13T19:48:54.497144363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:48:54.494879 systemd-resolved[1932]: System hostname changed to 'ip-172-31-22-232'.
Feb 13 19:48:54.501732 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO Agent will take identity from EC2
Feb 13 19:48:54.501798 containerd[2037]: time="2025-02-13T19:48:54.500443043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:48:54.501798 containerd[2037]: time="2025-02-13T19:48:54.500491271Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:48:54.501798 containerd[2037]: time="2025-02-13T19:48:54.500698319Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:48:54.501798 containerd[2037]: time="2025-02-13T19:48:54.500799995Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.521943431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.522058535Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.522178367Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.522220235Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.522271151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:48:54.522692 containerd[2037]: time="2025-02-13T19:48:54.522529847Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523098179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523363067Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523397951Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523428155Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523461695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523492847Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523524071Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523556207Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523589963Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523620503Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523649555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523677791Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523718075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526257 containerd[2037]: time="2025-02-13T19:48:54.523751627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523786103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523818239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523864163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523896047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523946375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.523978979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524009147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524044199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524072627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524102207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524132135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524167331Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:48:54.526945 containerd[2037]: time="2025-02-13T19:48:54.524208239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.524236259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528295967Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528409079Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528462251Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528491363Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528521171Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528552539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528590567Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528622271Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:48:54.529256 containerd[2037]: time="2025-02-13T19:48:54.528653579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:48:54.533772 containerd[2037]: time="2025-02-13T19:48:54.529185863Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:48:54.533772 containerd[2037]: time="2025-02-13T19:48:54.529789823Z" level=info msg="Connect containerd service"
Feb 13 19:48:54.533772 containerd[2037]: time="2025-02-13T19:48:54.529939451Z" level=info msg="using legacy CRI server"
Feb 13 19:48:54.533772 containerd[2037]: time="2025-02-13T19:48:54.529960139Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:48:54.533772 containerd[2037]: time="2025-02-13T19:48:54.530203547Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:48:54.563193 containerd[2037]: time="2025-02-13T19:48:54.554507399Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:48:54.563193 containerd[2037]: time="2025-02-13T19:48:54.559899695Z" level=info msg="Start subscribing containerd event"
Feb 13 19:48:54.563193 containerd[2037]: time="2025-02-13T19:48:54.560015267Z" level=info msg="Start recovering state"
Feb 13 19:48:54.563193 containerd[2037]: time="2025-02-13T19:48:54.562432859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:48:54.564510 containerd[2037]: time="2025-02-13T19:48:54.564461711Z" level=info msg="Start event monitor"
Feb 13 19:48:54.564574 containerd[2037]: time="2025-02-13T19:48:54.564516611Z" level=info msg="Start snapshots syncer"
Feb 13 19:48:54.564574 containerd[2037]: time="2025-02-13T19:48:54.564555443Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:48:54.564666 containerd[2037]: time="2025-02-13T19:48:54.564576539Z" level=info msg="Start streaming server"
Feb 13 19:48:54.572403 containerd[2037]: time="2025-02-13T19:48:54.569930195Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:48:54.570276 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:48:54.573271 containerd[2037]: time="2025-02-13T19:48:54.572772251Z" level=info msg="containerd successfully booted in 0.394698s"
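containerd booted successfully, but its CRI plugin logged above that no network config was found in /etc/cni/net.d; on a kubeadm-style node that directory is normally populated later by whichever network plugin is installed, so the error is expected at this stage. Purely as an illustration of the kind of file the CRI plugin looks for, the sketch below writes a minimal bridge conflist; the network name, bridge name, subnet, and CNI version are all assumptions, not values from this log:

    # Illustrative minimal CNI conflist of the kind containerd expects in
    # /etc/cni/net.d; every value below is an assumption for demonstration only.
    import json
    import pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "examplenet",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)  # requires root on a real host
    path.write_text(json.dumps(conflist, indent=2))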
Feb 13 19:48:54.598619 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:48:54.698274 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:48:54.797047 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:48:54.899260 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 19:48:55.000261 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 19:48:55.098784 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:48:55.199280 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:48:55.299049 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [Registrar] Starting registrar module
Feb 13 19:48:55.400267 amazon-ssm-agent[2075]: 2025-02-13 19:48:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:48:55.528034 tar[2030]: linux-arm64/LICENSE
Feb 13 19:48:55.530671 tar[2030]: linux-arm64/README.md
Feb 13 19:48:55.581422 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:48:55.631745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:48:55.646630 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:48:55.920277 amazon-ssm-agent[2075]: 2025-02-13 19:48:55 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:48:55.961400 amazon-ssm-agent[2075]: 2025-02-13 19:48:55 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:48:55.964544 amazon-ssm-agent[2075]: 2025-02-13 19:48:55 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:48:55.964544 amazon-ssm-agent[2075]: 2025-02-13 19:48:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:48:56.020753 amazon-ssm-agent[2075]: 2025-02-13 19:48:55 INFO [CredentialRefresher] Next credential rotation will be in 30.341611790866665 minutes
Feb 13 19:48:56.436252 sshd_keygen[2053]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:48:56.481045 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:48:56.492695 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:48:56.518744 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:48:56.521524 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:48:56.533646 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:48:56.567804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:48:56.576911 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:48:56.591966 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:48:56.594910 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:48:56.597611 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:48:56.600036 systemd[1]: Startup finished in 9.348s (kernel) + 9.650s (userspace) = 18.998s.
Feb 13 19:48:56.650880 kubelet[2257]: E0213 19:48:56.650823    2257 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:48:56.656456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:48:56.656874 systemd[1]: kubelet.service: Failed with result 'exit-code'.
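The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during `kubeadm init` or `kubeadm join`, so the repeated restarts that follow are expected until the node is bootstrapped. For illustration only, a minimal KubeletConfiguration of the shape the service is waiting for could be written like this; the field values are assumptions and this is not the normal bootstrap path:

    # Illustrative minimal /var/lib/kubelet/config.yaml of the kind kubeadm writes.
    # Field values are assumptions for demonstration; kubeadm normally generates this.
    import pathlib
    import textwrap

    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        staticPodPath: /etc/kubernetes/manifests
    """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)  # requires root on a real host
    path.write_text(CONFIG)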
Feb 13 19:48:56.988375 amazon-ssm-agent[2075]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:48:57.089520 amazon-ssm-agent[2075]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2290) started
Feb 13 19:48:57.190042 amazon-ssm-agent[2075]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:49:00.431095 systemd-resolved[1932]: Clock change detected. Flushing caches.
Feb 13 19:49:01.553449 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:49:01.562514 systemd[1]: Started sshd@0-172.31.22.232:22-139.178.89.65:37218.service - OpenSSH per-connection server daemon (139.178.89.65:37218).
Feb 13 19:49:01.745275 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 37218 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:01.748637 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:01.763947 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:49:01.769463 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:49:01.776321 systemd-logind[2018]: New session 1 of user core.
Feb 13 19:49:01.804451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:49:01.816559 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:49:01.826095 (systemd)[2305]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:49:02.037563 systemd[2305]: Queued start job for default target default.target.
Feb 13 19:49:02.038628 systemd[2305]: Created slice app.slice - User Application Slice.
Feb 13 19:49:02.039129 systemd[2305]: Reached target paths.target - Paths.
Feb 13 19:49:02.039161 systemd[2305]: Reached target timers.target - Timers.
Feb 13 19:49:02.049199 systemd[2305]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:49:02.062147 systemd[2305]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:49:02.062264 systemd[2305]: Reached target sockets.target - Sockets.
Feb 13 19:49:02.062295 systemd[2305]: Reached target basic.target - Basic System.
Feb 13 19:49:02.062375 systemd[2305]: Reached target default.target - Main User Target.
Feb 13 19:49:02.062436 systemd[2305]: Startup finished in 224ms.
Feb 13 19:49:02.062599 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:49:02.080140 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:49:02.233787 systemd[1]: Started sshd@1-172.31.22.232:22-139.178.89.65:37222.service - OpenSSH per-connection server daemon (139.178.89.65:37222).
Feb 13 19:49:02.416082 sshd[2317]: Accepted publickey for core from 139.178.89.65 port 37222 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:02.418612 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:02.427363 systemd-logind[2018]: New session 2 of user core.
Feb 13 19:49:02.434618 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:49:02.562916 sshd[2317]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:02.569961 systemd[1]: sshd@1-172.31.22.232:22-139.178.89.65:37222.service: Deactivated successfully.
Feb 13 19:49:02.574992 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:49:02.576466 systemd-logind[2018]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:49:02.578458 systemd-logind[2018]: Removed session 2.
Feb 13 19:49:02.600483 systemd[1]: Started sshd@2-172.31.22.232:22-139.178.89.65:37238.service - OpenSSH per-connection server daemon (139.178.89.65:37238).
Feb 13 19:49:02.768705 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 37238 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:02.771190 sshd[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:02.778830 systemd-logind[2018]: New session 3 of user core.
Feb 13 19:49:02.791598 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:49:02.912209 sshd[2325]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:02.919780 systemd[1]: sshd@2-172.31.22.232:22-139.178.89.65:37238.service: Deactivated successfully.
Feb 13 19:49:02.924602 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:49:02.926553 systemd-logind[2018]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:49:02.928226 systemd-logind[2018]: Removed session 3.
Feb 13 19:49:02.942499 systemd[1]: Started sshd@3-172.31.22.232:22-139.178.89.65:37248.service - OpenSSH per-connection server daemon (139.178.89.65:37248).
Feb 13 19:49:03.126761 sshd[2333]: Accepted publickey for core from 139.178.89.65 port 37248 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:03.129298 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:03.136643 systemd-logind[2018]: New session 4 of user core.
Feb 13 19:49:03.147592 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:49:03.276750 sshd[2333]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:03.281703 systemd[1]: sshd@3-172.31.22.232:22-139.178.89.65:37248.service: Deactivated successfully.
Feb 13 19:49:03.288469 systemd-logind[2018]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:49:03.288910 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:49:03.291620 systemd-logind[2018]: Removed session 4.
Feb 13 19:49:03.306493 systemd[1]: Started sshd@4-172.31.22.232:22-139.178.89.65:37256.service - OpenSSH per-connection server daemon (139.178.89.65:37256).
Feb 13 19:49:03.483694 sshd[2341]: Accepted publickey for core from 139.178.89.65 port 37256 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:03.486121 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:03.493730 systemd-logind[2018]: New session 5 of user core.
Feb 13 19:49:03.504468 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:49:03.620911 sudo[2345]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:49:03.622131 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:49:03.637604 sudo[2345]: pam_unix(sudo:session): session closed for user root
Feb 13 19:49:03.661438 sshd[2341]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:03.667130 systemd[1]: sshd@4-172.31.22.232:22-139.178.89.65:37256.service: Deactivated successfully.
Feb 13 19:49:03.675608 systemd-logind[2018]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:49:03.675633 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:49:03.678165 systemd-logind[2018]: Removed session 5.
Feb 13 19:49:03.690533 systemd[1]: Started sshd@5-172.31.22.232:22-139.178.89.65:37270.service - OpenSSH per-connection server daemon (139.178.89.65:37270).
Feb 13 19:49:03.868009 sshd[2350]: Accepted publickey for core from 139.178.89.65 port 37270 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:03.870597 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:03.879228 systemd-logind[2018]: New session 6 of user core.
Feb 13 19:49:03.886487 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:49:03.990745 sudo[2355]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:49:03.991441 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:49:03.997474 sudo[2355]: pam_unix(sudo:session): session closed for user root
Feb 13 19:49:04.007450 sudo[2354]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 19:49:04.008162 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:49:04.032497 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 19:49:04.036185 auditctl[2358]: No rules
Feb 13 19:49:04.036976 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:49:04.037523 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 19:49:04.050004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:49:04.092272 augenrules[2377]: No rules
Feb 13 19:49:04.095515 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
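The two sudo commands above remove the default fragments in /etc/audit/rules.d and restart audit-rules, after which both auditctl and augenrules report "No rules". Purely as an illustration of what one of those rules.d fragments looks like, the sketch below writes a single watch rule and reloads the rule set; the file name, watched path, and key are assumptions, and augenrules --load is the standard loader rather than anything shown in this log:

    # Illustrative audit rules.d fragment plus reload, mirroring the sequence above.
    # File name, watched path, and key are assumptions; run as root.
    import pathlib
    import subprocess

    RULE = "-w /etc/ssh/sshd_config -p wa -k sshd_config_changes\n"

    pathlib.Path("/etc/audit/rules.d/70-example.rules").write_text(RULE)
    subprocess.run(["augenrules", "--load"], check=True)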
Feb 13 19:49:04.099504 sudo[2354]: pam_unix(sudo:session): session closed for user root
Feb 13 19:49:04.123357 sshd[2350]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:04.130819 systemd[1]: sshd@5-172.31.22.232:22-139.178.89.65:37270.service: Deactivated successfully.
Feb 13 19:49:04.135453 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:49:04.136900 systemd-logind[2018]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:49:04.138954 systemd-logind[2018]: Removed session 6.
Feb 13 19:49:04.153534 systemd[1]: Started sshd@6-172.31.22.232:22-139.178.89.65:37272.service - OpenSSH per-connection server daemon (139.178.89.65:37272).
Feb 13 19:49:04.325364 sshd[2386]: Accepted publickey for core from 139.178.89.65 port 37272 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:49:04.327889 sshd[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:49:04.336239 systemd-logind[2018]: New session 7 of user core.
Feb 13 19:49:04.346532 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:49:04.451671 sudo[2390]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:49:04.453076 sudo[2390]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:49:04.879762 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:49:04.879936 (dockerd)[2405]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:49:05.236530 dockerd[2405]: time="2025-02-13T19:49:05.234750819Z" level=info msg="Starting up"
Feb 13 19:49:05.690903 dockerd[2405]: time="2025-02-13T19:49:05.690480942Z" level=info msg="Loading containers: start."
Feb 13 19:49:05.843107 kernel: Initializing XFRM netlink socket
Feb 13 19:49:05.878286 (udev-worker)[2427]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:49:05.960401 systemd-networkd[1600]: docker0: Link UP
Feb 13 19:49:05.986548 dockerd[2405]: time="2025-02-13T19:49:05.986480659Z" level=info msg="Loading containers: done."
Feb 13 19:49:06.009440 dockerd[2405]: time="2025-02-13T19:49:06.009357663Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:49:06.009658 dockerd[2405]: time="2025-02-13T19:49:06.009518823Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 19:49:06.009721 dockerd[2405]: time="2025-02-13T19:49:06.009699855Z" level=info msg="Daemon has completed initialization"
Feb 13 19:49:06.071067 dockerd[2405]: time="2025-02-13T19:49:06.070882192Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:49:06.072002 systemd[1]: Started docker.service - Docker Application Container Engine.
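With dockerd reporting "API listen on /run/docker.sock" above, the daemon can now be reached over that socket. A short liveness check with the Docker SDK for Python would look like the sketch below; the SDK is assumed to be installed and is not part of this image:

    # Illustrative liveness check against the dockerd instance started above.
    # Assumes the "docker" Python SDK is installed; it talks to /run/docker.sock.
    import docker

    client = docker.from_env()
    print("daemon reachable:", client.ping())            # True if the API answers
    print("server version:", client.version()["Version"])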
Feb 13 19:49:07.069540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:49:07.085364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:07.265720 containerd[2037]: time="2025-02-13T19:49:07.265665737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 19:49:07.446411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:07.456655 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:49:07.542360 kubelet[2568]: E0213 19:49:07.542280    2568 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:49:07.550154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:49:07.550818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:49:07.909709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552905862.mount: Deactivated successfully.
Feb 13 19:49:09.358068 containerd[2037]: time="2025-02-13T19:49:09.356585552Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207"
Feb 13 19:49:09.358068 containerd[2037]: time="2025-02-13T19:49:09.356795072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:09.360108 containerd[2037]: time="2025-02-13T19:49:09.360046880Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:09.366468 containerd[2037]: time="2025-02-13T19:49:09.366383924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:09.369160 containerd[2037]: time="2025-02-13T19:49:09.368850932Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.102794655s"
Feb 13 19:49:09.369160 containerd[2037]: time="2025-02-13T19:49:09.368910572Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 19:49:09.406178 containerd[2037]: time="2025-02-13T19:49:09.406132196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 19:49:10.981012 containerd[2037]: time="2025-02-13T19:49:10.980933400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:10.983120 containerd[2037]: time="2025-02-13T19:49:10.983060988Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594"
Feb 13 19:49:10.984331 containerd[2037]: time="2025-02-13T19:49:10.984234324Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:10.989879 containerd[2037]: time="2025-02-13T19:49:10.989729076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:10.993580 containerd[2037]: time="2025-02-13T19:49:10.992705652Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.58634578s"
Feb 13 19:49:10.993580 containerd[2037]: time="2025-02-13T19:49:10.992765148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 19:49:11.032351 containerd[2037]: time="2025-02-13T19:49:11.032286404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 19:49:12.583353 containerd[2037]: time="2025-02-13T19:49:12.583277796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:12.586395 containerd[2037]: time="2025-02-13T19:49:12.586328628Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934"
Feb 13 19:49:12.587893 containerd[2037]: time="2025-02-13T19:49:12.587822364Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:12.593465 containerd[2037]: time="2025-02-13T19:49:12.593413776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:12.595980 containerd[2037]: time="2025-02-13T19:49:12.595792752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.563443036s"
Feb 13 19:49:12.595980 containerd[2037]: time="2025-02-13T19:49:12.595848144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 19:49:12.634413 containerd[2037]: time="2025-02-13T19:49:12.634343436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 19:49:13.923323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110812069.mount: Deactivated successfully.
Feb 13 19:49:14.461490 containerd[2037]: time="2025-02-13T19:49:14.461427421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:14.462950 containerd[2037]: time="2025-02-13T19:49:14.462880981Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370"
Feb 13 19:49:14.465197 containerd[2037]: time="2025-02-13T19:49:14.465119617Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:14.468700 containerd[2037]: time="2025-02-13T19:49:14.468650941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:14.470418 containerd[2037]: time="2025-02-13T19:49:14.470226241Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.835816841s"
Feb 13 19:49:14.470418 containerd[2037]: time="2025-02-13T19:49:14.470277109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 19:49:14.514077 containerd[2037]: time="2025-02-13T19:49:14.513836449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:49:15.093599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745370735.mount: Deactivated successfully.
Feb 13 19:49:16.121407 containerd[2037]: time="2025-02-13T19:49:16.121329625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.123618 containerd[2037]: time="2025-02-13T19:49:16.123548017Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 19:49:16.124519 containerd[2037]: time="2025-02-13T19:49:16.124420201Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.130684 containerd[2037]: time="2025-02-13T19:49:16.130575073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.133212 containerd[2037]: time="2025-02-13T19:49:16.133150045Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.619248544s"
Feb 13 19:49:16.133502 containerd[2037]: time="2025-02-13T19:49:16.133357333Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 19:49:16.173085 containerd[2037]: time="2025-02-13T19:49:16.172205162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 19:49:16.669869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196591373.mount: Deactivated successfully.
Feb 13 19:49:16.677277 containerd[2037]: time="2025-02-13T19:49:16.677201680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.680183 containerd[2037]: time="2025-02-13T19:49:16.680121052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Feb 13 19:49:16.681334 containerd[2037]: time="2025-02-13T19:49:16.681257488Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.685408 containerd[2037]: time="2025-02-13T19:49:16.685347508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:16.687456 containerd[2037]: time="2025-02-13T19:49:16.687275968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 515.00657ms"
Feb 13 19:49:16.687456 containerd[2037]: time="2025-02-13T19:49:16.687327016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 19:49:16.724213 containerd[2037]: time="2025-02-13T19:49:16.724142152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 19:49:17.297407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232338552.mount: Deactivated successfully.
Feb 13 19:49:17.570929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:49:17.587478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:17.940811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:17.974500 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:49:18.093462 kubelet[2765]: E0213 19:49:18.093350    2765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:49:18.100283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:49:18.100653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:49:20.814813 containerd[2037]: time="2025-02-13T19:49:20.814729293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:20.817145 containerd[2037]: time="2025-02-13T19:49:20.817073841Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Feb 13 19:49:20.819626 containerd[2037]: time="2025-02-13T19:49:20.819547929Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:20.825917 containerd[2037]: time="2025-02-13T19:49:20.825816321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:20.828468 containerd[2037]: time="2025-02-13T19:49:20.828406173Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.104181017s"
Feb 13 19:49:20.828805 containerd[2037]: time="2025-02-13T19:49:20.828635409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
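The pulls above (kube-apiserver through etcd, plus coredns and pause) are the control-plane image set for the Kubernetes version being installed, fetched through containerd's CRI plugin. The same pulls can be reproduced by hand against this containerd instance; the sketch below shells out to the ctr client in the k8s.io namespace, where both the tool's availability on this host and the namespace are assumptions rather than facts from the log:

    # Illustrative manual pull through containerd, mirroring the PullImage entries above.
    # Assumes the ctr client is installed and containerd uses the k8s.io namespace.
    import subprocess

    IMAGES = [
        "registry.k8s.io/pause:3.9",
        "registry.k8s.io/etcd:3.5.12-0",
    ]

    for ref in IMAGES:
        subprocess.run(
            ["ctr", "--namespace", "k8s.io", "images", "pull", ref],
            check=True,
        )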
Feb 13 19:49:24.763971 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:49:28.319731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 19:49:28.331828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:28.642311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:28.658643 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:49:28.741990 kubelet[2860]: E0213 19:49:28.741918    2860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:49:28.748549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:49:28.748942 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:49:29.900648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:29.911518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:29.951646 systemd[1]: Reloading requested from client PID 2876 ('systemctl') (unit session-7.scope)...
Feb 13 19:49:29.951684 systemd[1]: Reloading...
Feb 13 19:49:30.201789 zram_generator::config[2922]: No configuration found.
Feb 13 19:49:30.431767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:49:30.589703 systemd[1]: Reloading finished in 637 ms.
Feb 13 19:49:30.687379 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 19:49:30.688182 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 19:49:30.688950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:30.697722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:30.976506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:30.987682 (kubelet)[2991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:49:31.060599 kubelet[2991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:49:31.060599 kubelet[2991]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:49:31.060599 kubelet[2991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:49:31.063172 kubelet[2991]: I0213 19:49:31.060702    2991 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
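Of the three deprecated flags warned about above, two have direct equivalents in the same KubeletConfiguration file, while the sandbox (pause) image is now reported to the kubelet by the container runtime over CRI, as the last message notes. A hedged sketch of the equivalent config-file fields (real v1beta1 field names; the socket path is an assumption based on containerd being the runtime here, and the volume plugin path mirrors the Flexvolume directory recreated a few lines below):

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock     # assumed containerd socket path
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/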
Feb 13 19:49:31.758660 kubelet[2991]: I0213 19:49:31.758598    2991 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:49:31.758660 kubelet[2991]: I0213 19:49:31.758646    2991 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:49:31.759004 kubelet[2991]: I0213 19:49:31.758964    2991 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:49:31.784316 kubelet[2991]: I0213 19:49:31.783949    2991 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:49:31.784562 kubelet[2991]: E0213 19:49:31.784535    2991 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.799216 kubelet[2991]: I0213 19:49:31.799171    2991 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 19:49:31.800290 kubelet[2991]: I0213 19:49:31.800229    2991 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:49:31.801213 kubelet[2991]: I0213 19:49:31.800428    2991 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
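The HardEvictionThresholds embedded in the NodeConfig dump above are the stock kubelet defaults; expressed in the config file they are the evictionHard map below, which is a direct translation of that JSON (Percentage 0.1 = 10%, and so on), not an addition to it:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"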
Feb 13 19:49:31.802075 kubelet[2991]: I0213 19:49:31.801602    2991 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:49:31.802075 kubelet[2991]: I0213 19:49:31.801637    2991 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:49:31.802075 kubelet[2991]: I0213 19:49:31.801883    2991 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:49:31.803738 kubelet[2991]: I0213 19:49:31.803710    2991 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:49:31.803885 kubelet[2991]: I0213 19:49:31.803863    2991 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:49:31.804096 kubelet[2991]: I0213 19:49:31.804075    2991 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:49:31.804272 kubelet[2991]: I0213 19:49:31.804252    2991 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:49:31.805753 kubelet[2991]: W0213 19:49:31.805686    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.232:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.807083 kubelet[2991]: E0213 19:49:31.805956    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.232:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.807083 kubelet[2991]: W0213 19:49:31.806122    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.807083 kubelet[2991]: E0213 19:49:31.806184    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.807083 kubelet[2991]: I0213 19:49:31.806328    2991 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 19:49:31.807083 kubelet[2991]: I0213 19:49:31.806670    2991 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:49:31.807083 kubelet[2991]: W0213 19:49:31.806750    2991 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:49:31.810017 kubelet[2991]: I0213 19:49:31.809982    2991 server.go:1264] "Started kubelet"
Feb 13 19:49:31.818118 kubelet[2991]: E0213 19:49:31.817148    2991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-232.1823dc5cb3fd01bf  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-232,UID:ip-172-31-22-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-232,},FirstTimestamp:2025-02-13 19:49:31.809948095 +0000 UTC m=+0.816355301,LastTimestamp:2025-02-13 19:49:31.809948095 +0000 UTC m=+0.816355301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-232,}"
Feb 13 19:49:31.818118 kubelet[2991]: I0213 19:49:31.817694    2991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:49:31.818576 kubelet[2991]: I0213 19:49:31.818536    2991 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:49:31.818649 kubelet[2991]: I0213 19:49:31.818615    2991 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:49:31.820007 kubelet[2991]: I0213 19:49:31.819966    2991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:49:31.820284 kubelet[2991]: I0213 19:49:31.820250    2991 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:49:31.831395 kubelet[2991]: E0213 19:49:31.831175    2991 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:49:31.832131 kubelet[2991]: E0213 19:49:31.831649    2991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-22-232\" not found"
Feb 13 19:49:31.832131 kubelet[2991]: I0213 19:49:31.831752    2991 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:49:31.832131 kubelet[2991]: I0213 19:49:31.831905    2991 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:49:31.833762 kubelet[2991]: I0213 19:49:31.833733    2991 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:49:31.835585 kubelet[2991]: E0213 19:49:31.835531    2991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="200ms"
Feb 13 19:49:31.835905 kubelet[2991]: W0213 19:49:31.835843    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.836107 kubelet[2991]: E0213 19:49:31.836084    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.838806 kubelet[2991]: I0213 19:49:31.838768    2991 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:49:31.838988 kubelet[2991]: I0213 19:49:31.838969    2991 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:49:31.841093 kubelet[2991]: I0213 19:49:31.839237    2991 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:49:31.861709 kubelet[2991]: I0213 19:49:31.861641    2991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:49:31.864002 kubelet[2991]: I0213 19:49:31.863961    2991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:49:31.867296 kubelet[2991]: I0213 19:49:31.867264    2991 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:49:31.867465 kubelet[2991]: I0213 19:49:31.867447    2991 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:49:31.867619 kubelet[2991]: E0213 19:49:31.867588    2991 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:49:31.877239 kubelet[2991]: W0213 19:49:31.877165    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.877591 kubelet[2991]: E0213 19:49:31.877547    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:31.897999 kubelet[2991]: I0213 19:49:31.897967    2991 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:49:31.898314 kubelet[2991]: I0213 19:49:31.898211    2991 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:49:31.898446 kubelet[2991]: I0213 19:49:31.898428    2991 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:49:31.902822 kubelet[2991]: I0213 19:49:31.902793    2991 policy_none.go:49] "None policy: Start"
Feb 13 19:49:31.904000 kubelet[2991]: I0213 19:49:31.903968    2991 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:49:31.904217 kubelet[2991]: I0213 19:49:31.904198    2991 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:49:31.914230 kubelet[2991]: I0213 19:49:31.914191    2991 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:49:31.914682 kubelet[2991]: I0213 19:49:31.914632    2991 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:49:31.914889 kubelet[2991]: I0213 19:49:31.914871    2991 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:49:31.924742 kubelet[2991]: E0213 19:49:31.924707    2991 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-232\" not found"
Feb 13 19:49:31.934529 kubelet[2991]: I0213 19:49:31.934465    2991 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:31.935001 kubelet[2991]: E0213 19:49:31.934940    2991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Feb 13 19:49:31.968391 kubelet[2991]: I0213 19:49:31.968324    2991 topology_manager.go:215] "Topology Admit Handler" podUID="168ade5fe80b23b3df2f51ad0dfb9a9d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:31.970532 kubelet[2991]: I0213 19:49:31.970465    2991 topology_manager.go:215] "Topology Admit Handler" podUID="fca8f39427e7d914de4ab8c0be91a3bd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:31.972860 kubelet[2991]: I0213 19:49:31.972632    2991 topology_manager.go:215] "Topology Admit Handler" podUID="975e8bc11ba01868aabfd72bc3584cde" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-232"
Feb 13 19:49:32.035163 kubelet[2991]: I0213 19:49:32.034952    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:32.035163 kubelet[2991]: I0213 19:49:32.035012    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:32.035163 kubelet[2991]: I0213 19:49:32.035083    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:32.035163 kubelet[2991]: I0213 19:49:32.035126    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:32.035443 kubelet[2991]: I0213 19:49:32.035178    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:32.035443 kubelet[2991]: I0213 19:49:32.035221    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:32.035443 kubelet[2991]: I0213 19:49:32.035259    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/975e8bc11ba01868aabfd72bc3584cde-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-232\" (UID: \"975e8bc11ba01868aabfd72bc3584cde\") " pod="kube-system/kube-scheduler-ip-172-31-22-232"
Feb 13 19:49:32.035443 kubelet[2991]: I0213 19:49:32.035297    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:32.035443 kubelet[2991]: I0213 19:49:32.035335    2991 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
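The VerifyControllerAttachedVolume lines above are the hostPath volumes declared in the kubeadm-generated static pod manifests under /etc/kubernetes/manifests. The exact manifests on this host are not shown in the log, so the abridged sketch below is only indicative of the shape; the volume names come from the log, while the image and host paths are the usual kubeadm ones and should be treated as assumptions:

    # /etc/kubernetes/manifests/kube-controller-manager.yaml (abridged, illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - name: kube-controller-manager
        image: registry.k8s.io/kube-controller-manager:v1.30.1   # assumed; matches the kubelet version in this log
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
        - name: kubeconfig
          mountPath: /etc/kubernetes/controller-manager.conf
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki                           # assumed
          type: DirectoryOrCreate
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/controller-manager.conf       # assumed
          type: FileOrCreate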
Feb 13 19:49:32.037271 kubelet[2991]: E0213 19:49:32.037182    2991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="400ms"
Feb 13 19:49:32.138404 kubelet[2991]: I0213 19:49:32.138259    2991 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:32.139671 kubelet[2991]: E0213 19:49:32.139625    2991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Feb 13 19:49:32.279414 containerd[2037]: time="2025-02-13T19:49:32.279110958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-232,Uid:168ade5fe80b23b3df2f51ad0dfb9a9d,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:32.284219 containerd[2037]: time="2025-02-13T19:49:32.283822722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-232,Uid:fca8f39427e7d914de4ab8c0be91a3bd,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:32.290665 containerd[2037]: time="2025-02-13T19:49:32.290175426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-232,Uid:975e8bc11ba01868aabfd72bc3584cde,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:32.438786 kubelet[2991]: E0213 19:49:32.438722    2991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="800ms"
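The lease-controller retries above double their interval each time (200 ms, 400 ms, 800 ms, then 1.6 s a bit further down) while the API server's own static pod is still coming up. Once it is reachable, the kubelet creates and then renews a Lease in kube-node-lease named after the node, as in the request URL above; a sketch using the real coordination.k8s.io/v1 schema with illustrative values:

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: ip-172-31-22-232            # matches the node name in the URL above
      namespace: kube-node-lease
    spec:
      holderIdentity: ip-172-31-22-232
      leaseDurationSeconds: 40          # kubelet default; assumed, not read from this host
      renewTime: "2025-02-13T19:49:37.000000Z"   # illustrative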
Feb 13 19:49:32.542784 kubelet[2991]: I0213 19:49:32.542155    2991 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:32.542784 kubelet[2991]: E0213 19:49:32.542634    2991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Feb 13 19:49:32.609645 kubelet[2991]: W0213 19:49:32.609556    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:32.609645 kubelet[2991]: E0213 19:49:32.609652    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-232&limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:32.823410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806978549.mount: Deactivated successfully.
Feb 13 19:49:32.826757 kubelet[2991]: W0213 19:49:32.826637    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:32.826757 kubelet[2991]: E0213 19:49:32.826725    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:32.835247 containerd[2037]: time="2025-02-13T19:49:32.835177868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:49:32.841611 containerd[2037]: time="2025-02-13T19:49:32.841498004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 19:49:32.844078 containerd[2037]: time="2025-02-13T19:49:32.843272720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:49:32.845520 containerd[2037]: time="2025-02-13T19:49:32.845479245Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:49:32.847776 containerd[2037]: time="2025-02-13T19:49:32.847725333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:49:32.850141 containerd[2037]: time="2025-02-13T19:49:32.850087857Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:49:32.851636 containerd[2037]: time="2025-02-13T19:49:32.851598789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:49:32.856973 containerd[2037]: time="2025-02-13T19:49:32.856922949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:49:32.861184 containerd[2037]: time="2025-02-13T19:49:32.861134061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.857595ms"
Feb 13 19:49:32.864670 containerd[2037]: time="2025-02-13T19:49:32.864611337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.394479ms"
Feb 13 19:49:32.905065 containerd[2037]: time="2025-02-13T19:49:32.904671093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 620.737671ms"
Feb 13 19:49:33.053838 kubelet[2991]: W0213 19:49:33.053664    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:33.053838 kubelet[2991]: E0213 19:49:33.053781    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:33.069393 containerd[2037]: time="2025-02-13T19:49:33.068888310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:33.069601 containerd[2037]: time="2025-02-13T19:49:33.069178482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:33.069601 containerd[2037]: time="2025-02-13T19:49:33.069481602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.070363 containerd[2037]: time="2025-02-13T19:49:33.070248882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:33.070700 containerd[2037]: time="2025-02-13T19:49:33.070623714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.073695 containerd[2037]: time="2025-02-13T19:49:33.070329138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:33.073695 containerd[2037]: time="2025-02-13T19:49:33.073278270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.073695 containerd[2037]: time="2025-02-13T19:49:33.073491414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.075593 containerd[2037]: time="2025-02-13T19:49:33.074435358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:33.079046 containerd[2037]: time="2025-02-13T19:49:33.078677022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:33.079046 containerd[2037]: time="2025-02-13T19:49:33.078739110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.079987 containerd[2037]: time="2025-02-13T19:49:33.079792350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:33.229920 containerd[2037]: time="2025-02-13T19:49:33.229806006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-232,Uid:168ade5fe80b23b3df2f51ad0dfb9a9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f923092cb87d00154bf153e2e12916435b3c9879e975cb7fae701d532126a7b\""
Feb 13 19:49:33.238424 containerd[2037]: time="2025-02-13T19:49:33.237843342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-232,Uid:fca8f39427e7d914de4ab8c0be91a3bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bca79a85286abf0724a1740e67c316562ea2ac89396304b3ffa3afbab82d662\""
Feb 13 19:49:33.240406 kubelet[2991]: E0213 19:49:33.240327    2991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": dial tcp 172.31.22.232:6443: connect: connection refused" interval="1.6s"
Feb 13 19:49:33.242827 containerd[2037]: time="2025-02-13T19:49:33.242669934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-232,Uid:975e8bc11ba01868aabfd72bc3584cde,Namespace:kube-system,Attempt:0,} returns sandbox id \"71933d1aff7e62fe6899b5e934927354921c68863c5556c6f155e67ab1e4ead3\""
Feb 13 19:49:33.244942 containerd[2037]: time="2025-02-13T19:49:33.244774350Z" level=info msg="CreateContainer within sandbox \"8f923092cb87d00154bf153e2e12916435b3c9879e975cb7fae701d532126a7b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:49:33.247065 containerd[2037]: time="2025-02-13T19:49:33.246276858Z" level=info msg="CreateContainer within sandbox \"0bca79a85286abf0724a1740e67c316562ea2ac89396304b3ffa3afbab82d662\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:49:33.250651 containerd[2037]: time="2025-02-13T19:49:33.250601155Z" level=info msg="CreateContainer within sandbox \"71933d1aff7e62fe6899b5e934927354921c68863c5556c6f155e67ab1e4ead3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:49:33.301834 containerd[2037]: time="2025-02-13T19:49:33.301738843Z" level=info msg="CreateContainer within sandbox \"0bca79a85286abf0724a1740e67c316562ea2ac89396304b3ffa3afbab82d662\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58\""
Feb 13 19:49:33.304154 containerd[2037]: time="2025-02-13T19:49:33.303322879Z" level=info msg="StartContainer for \"d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58\""
Feb 13 19:49:33.311528 containerd[2037]: time="2025-02-13T19:49:33.311452219Z" level=info msg="CreateContainer within sandbox \"71933d1aff7e62fe6899b5e934927354921c68863c5556c6f155e67ab1e4ead3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc\""
Feb 13 19:49:33.314099 containerd[2037]: time="2025-02-13T19:49:33.312318715Z" level=info msg="StartContainer for \"231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc\""
Feb 13 19:49:33.314283 containerd[2037]: time="2025-02-13T19:49:33.314222647Z" level=info msg="CreateContainer within sandbox \"8f923092cb87d00154bf153e2e12916435b3c9879e975cb7fae701d532126a7b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dcc6a8bd15373f2bd3229da1c3b0e4803ceab7ee5e9a68d97abb40b7eb8e5546\""
Feb 13 19:49:33.315005 containerd[2037]: time="2025-02-13T19:49:33.314936179Z" level=info msg="StartContainer for \"dcc6a8bd15373f2bd3229da1c3b0e4803ceab7ee5e9a68d97abb40b7eb8e5546\""
Feb 13 19:49:33.348874 kubelet[2991]: I0213 19:49:33.348250    2991 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:33.348874 kubelet[2991]: E0213 19:49:33.348724    2991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.232:6443/api/v1/nodes\": dial tcp 172.31.22.232:6443: connect: connection refused" node="ip-172-31-22-232"
Feb 13 19:49:33.358453 kubelet[2991]: W0213 19:49:33.358386    2991 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.232:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:33.358673 kubelet[2991]: E0213 19:49:33.358651    2991 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.232:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.232:6443: connect: connection refused
Feb 13 19:49:33.517067 containerd[2037]: time="2025-02-13T19:49:33.515891576Z" level=info msg="StartContainer for \"d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58\" returns successfully"
Feb 13 19:49:33.535294 containerd[2037]: time="2025-02-13T19:49:33.535231400Z" level=info msg="StartContainer for \"231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc\" returns successfully"
Feb 13 19:49:33.559119 containerd[2037]: time="2025-02-13T19:49:33.559050752Z" level=info msg="StartContainer for \"dcc6a8bd15373f2bd3229da1c3b0e4803ceab7ee5e9a68d97abb40b7eb8e5546\" returns successfully"
Feb 13 19:49:34.951464 kubelet[2991]: I0213 19:49:34.951377    2991 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:37.224439 kubelet[2991]: E0213 19:49:37.224352    2991 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-232\" not found" node="ip-172-31-22-232"
Feb 13 19:49:37.345070 kubelet[2991]: I0213 19:49:37.343317    2991 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-232"
Feb 13 19:49:37.809800 kubelet[2991]: I0213 19:49:37.809438    2991 apiserver.go:52] "Watching apiserver"
Feb 13 19:49:37.833068 kubelet[2991]: I0213 19:49:37.832943    2991 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:49:38.878648 update_engine[2021]: I20250213 19:49:38.878224  2021 update_attempter.cc:509] Updating boot flags...
Feb 13 19:49:38.961180 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3283)
Feb 13 19:49:39.303171 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3286)
Feb 13 19:49:39.517876 systemd[1]: Reloading requested from client PID 3452 ('systemctl') (unit session-7.scope)...
Feb 13 19:49:39.517902 systemd[1]: Reloading...
Feb 13 19:49:39.600940 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3286)
Feb 13 19:49:39.838927 zram_generator::config[3546]: No configuration found.
Feb 13 19:49:40.238944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:49:40.418341 systemd[1]: Reloading finished in 899 ms.
Feb 13 19:49:40.523589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:40.528679 kubelet[2991]: I0213 19:49:40.527395    2991 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:49:40.551930 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:49:40.554736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:40.573567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:40.891377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:49:40.906756 (kubelet)[3647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:49:41.010873 kubelet[3647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:49:41.010873 kubelet[3647]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:49:41.010873 kubelet[3647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:49:41.013668 kubelet[3647]: I0213 19:49:41.010938    3647 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:49:41.033966 kubelet[3647]: I0213 19:49:41.033884    3647 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:49:41.033966 kubelet[3647]: I0213 19:49:41.033930    3647 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:49:41.034754 kubelet[3647]: I0213 19:49:41.034317    3647 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:49:41.038339 kubelet[3647]: I0213 19:49:41.038238    3647 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:49:41.042526 kubelet[3647]: I0213 19:49:41.041381    3647 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:49:41.055829 kubelet[3647]: I0213 19:49:41.055735    3647 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 19:49:41.057352 kubelet[3647]: I0213 19:49:41.056699    3647 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:49:41.057352 kubelet[3647]: I0213 19:49:41.056763    3647 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:49:41.057352 kubelet[3647]: I0213 19:49:41.057095    3647 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:49:41.057352 kubelet[3647]: I0213 19:49:41.057118    3647 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:49:41.057352 kubelet[3647]: I0213 19:49:41.057171    3647 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:49:41.059349 kubelet[3647]: I0213 19:49:41.057348    3647 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:49:41.059349 kubelet[3647]: I0213 19:49:41.057371    3647 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:49:41.059349 kubelet[3647]: I0213 19:49:41.057421    3647 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:49:41.059349 kubelet[3647]: I0213 19:49:41.057457    3647 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:49:41.067975 kubelet[3647]: I0213 19:49:41.065267    3647 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 19:49:41.067975 kubelet[3647]: I0213 19:49:41.065543    3647 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:49:41.072529 kubelet[3647]: I0213 19:49:41.070751    3647 server.go:1264] "Started kubelet"
Feb 13 19:49:41.072529 kubelet[3647]: I0213 19:49:41.071793    3647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:49:41.072529 kubelet[3647]: I0213 19:49:41.072227    3647 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:49:41.072529 kubelet[3647]: I0213 19:49:41.072286    3647 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:49:41.079674 kubelet[3647]: I0213 19:49:41.079630    3647 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:49:41.099045 kubelet[3647]: I0213 19:49:41.096949    3647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:49:41.109902 kubelet[3647]: I0213 19:49:41.109858    3647 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:49:41.112812 kubelet[3647]: I0213 19:49:41.112774    3647 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:49:41.113321 kubelet[3647]: I0213 19:49:41.113297    3647 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:49:41.145304 kubelet[3647]: I0213 19:49:41.145188    3647 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:49:41.145677 kubelet[3647]: I0213 19:49:41.145558    3647 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:49:41.152497 kubelet[3647]: E0213 19:49:41.152459    3647 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:49:41.153657 kubelet[3647]: I0213 19:49:41.153619    3647 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:49:41.169651 kubelet[3647]: I0213 19:49:41.169440    3647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:49:41.173004 kubelet[3647]: I0213 19:49:41.172662    3647 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:49:41.173004 kubelet[3647]: I0213 19:49:41.172729    3647 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:49:41.173004 kubelet[3647]: I0213 19:49:41.172764    3647 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:49:41.173004 kubelet[3647]: E0213 19:49:41.172831    3647 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:49:41.236049 kubelet[3647]: I0213 19:49:41.235381    3647 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-232"
Feb 13 19:49:41.251018 kubelet[3647]: I0213 19:49:41.250633    3647 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-232"
Feb 13 19:49:41.251018 kubelet[3647]: I0213 19:49:41.250860    3647 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-232"
Feb 13 19:49:41.275587 kubelet[3647]: E0213 19:49:41.275535    3647 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 19:49:41.344395 kubelet[3647]: I0213 19:49:41.344355    3647 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:49:41.344395 kubelet[3647]: I0213 19:49:41.344386    3647 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:49:41.344641 kubelet[3647]: I0213 19:49:41.344422    3647 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:49:41.344696 kubelet[3647]: I0213 19:49:41.344671    3647 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:49:41.344747 kubelet[3647]: I0213 19:49:41.344692    3647 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:49:41.344747 kubelet[3647]: I0213 19:49:41.344726    3647 policy_none.go:49] "None policy: Start"
Feb 13 19:49:41.346401 kubelet[3647]: I0213 19:49:41.345950    3647 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:49:41.346401 kubelet[3647]: I0213 19:49:41.345998    3647 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:49:41.346401 kubelet[3647]: I0213 19:49:41.346364    3647 state_mem.go:75] "Updated machine memory state"
Feb 13 19:49:41.349043 kubelet[3647]: I0213 19:49:41.348994    3647 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:49:41.349341 kubelet[3647]: I0213 19:49:41.349276    3647 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:49:41.354054 kubelet[3647]: I0213 19:49:41.351628    3647 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:49:41.478926 kubelet[3647]: I0213 19:49:41.475720    3647 topology_manager.go:215] "Topology Admit Handler" podUID="168ade5fe80b23b3df2f51ad0dfb9a9d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:41.478926 kubelet[3647]: I0213 19:49:41.475929    3647 topology_manager.go:215] "Topology Admit Handler" podUID="fca8f39427e7d914de4ab8c0be91a3bd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:41.478926 kubelet[3647]: I0213 19:49:41.476007    3647 topology_manager.go:215] "Topology Admit Handler" podUID="975e8bc11ba01868aabfd72bc3584cde" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-232"
Feb 13 19:49:41.493421 kubelet[3647]: E0213 19:49:41.493351    3647 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-232\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:41.494081 kubelet[3647]: E0213 19:49:41.493951    3647 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-232\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-232"
Feb 13 19:49:41.521358 kubelet[3647]: I0213 19:49:41.521242    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:41.521358 kubelet[3647]: I0213 19:49:41.521317    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:41.521843 kubelet[3647]: I0213 19:49:41.521620    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/168ade5fe80b23b3df2f51ad0dfb9a9d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-232\" (UID: \"168ade5fe80b23b3df2f51ad0dfb9a9d\") " pod="kube-system/kube-apiserver-ip-172-31-22-232"
Feb 13 19:49:41.521843 kubelet[3647]: I0213 19:49:41.521762    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:41.521843 kubelet[3647]: I0213 19:49:41.521805    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:41.522408 kubelet[3647]: I0213 19:49:41.522096    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:41.522408 kubelet[3647]: I0213 19:49:41.522240    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/975e8bc11ba01868aabfd72bc3584cde-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-232\" (UID: \"975e8bc11ba01868aabfd72bc3584cde\") " pod="kube-system/kube-scheduler-ip-172-31-22-232"
Feb 13 19:49:41.522408 kubelet[3647]: I0213 19:49:41.522281    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:41.522408 kubelet[3647]: I0213 19:49:41.522361    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fca8f39427e7d914de4ab8c0be91a3bd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-232\" (UID: \"fca8f39427e7d914de4ab8c0be91a3bd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-232"
Feb 13 19:49:42.067107 kubelet[3647]: I0213 19:49:42.066727    3647 apiserver.go:52] "Watching apiserver"
Feb 13 19:49:42.115869 kubelet[3647]: I0213 19:49:42.114065    3647 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:49:42.452049 kubelet[3647]: I0213 19:49:42.451605    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-232" podStartSLOduration=4.451553368 podStartE2EDuration="4.451553368s" podCreationTimestamp="2025-02-13 19:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:42.402559204 +0000 UTC m=+1.484767208" watchObservedRunningTime="2025-02-13 19:49:42.451553368 +0000 UTC m=+1.533761336"
Feb 13 19:49:42.488469 kubelet[3647]: I0213 19:49:42.488374    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-232" podStartSLOduration=3.488349124 podStartE2EDuration="3.488349124s" podCreationTimestamp="2025-02-13 19:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:42.454930528 +0000 UTC m=+1.537138496" watchObservedRunningTime="2025-02-13 19:49:42.488349124 +0000 UTC m=+1.570557080"
Feb 13 19:49:42.532397 kubelet[3647]: I0213 19:49:42.532298    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-232" podStartSLOduration=1.532278917 podStartE2EDuration="1.532278917s" podCreationTimestamp="2025-02-13 19:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:42.490063924 +0000 UTC m=+1.572271928" watchObservedRunningTime="2025-02-13 19:49:42.532278917 +0000 UTC m=+1.614486873"
Feb 13 19:49:48.189117 sudo[2390]: pam_unix(sudo:session): session closed for user root
Feb 13 19:49:48.213378 sshd[2386]: pam_unix(sshd:session): session closed for user core
Feb 13 19:49:48.218629 systemd-logind[2018]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:49:48.219314 systemd[1]: sshd@6-172.31.22.232:22-139.178.89.65:37272.service: Deactivated successfully.
Feb 13 19:49:48.228009 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:49:48.231131 systemd-logind[2018]: Removed session 7.
Feb 13 19:49:54.491629 kubelet[3647]: I0213 19:49:54.491575    3647 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:49:54.497243 kubelet[3647]: I0213 19:49:54.493796    3647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:49:54.497353 containerd[2037]: time="2025-02-13T19:49:54.493444120Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
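The 192.168.0.0/24 range above is the per-node podCIDR that the controller-manager carved out of the cluster-wide pod network and that the kubelet is now pushing to containerd over CRI; the "No cni config template" message means containerd keeps waiting for a CNI config, which the Calico operator admitted below will eventually install. With kubeadm the cluster-wide range is set at init time roughly like this (kubeadm.k8s.io/v1beta3 is the real API group; the /16 is an assumption consistent with this node receiving 192.168.0.0/24 out of it):

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: 192.168.0.0/16        # assumed cluster-wide pod network
      serviceSubnet: 10.96.0.0/12      # kubeadm default; assumed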
Feb 13 19:49:54.687358 kubelet[3647]: I0213 19:49:54.680710    3647 topology_manager.go:215] "Topology Admit Handler" podUID="0c418eea-47e0-4e43-b687-b63f56350e1c" podNamespace="kube-system" podName="kube-proxy-tmf6v"
Feb 13 19:49:54.719316 kubelet[3647]: I0213 19:49:54.719266    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c418eea-47e0-4e43-b687-b63f56350e1c-kube-proxy\") pod \"kube-proxy-tmf6v\" (UID: \"0c418eea-47e0-4e43-b687-b63f56350e1c\") " pod="kube-system/kube-proxy-tmf6v"
Feb 13 19:49:54.727062 kubelet[3647]: I0213 19:49:54.726830    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c418eea-47e0-4e43-b687-b63f56350e1c-xtables-lock\") pod \"kube-proxy-tmf6v\" (UID: \"0c418eea-47e0-4e43-b687-b63f56350e1c\") " pod="kube-system/kube-proxy-tmf6v"
Feb 13 19:49:54.731060 kubelet[3647]: I0213 19:49:54.729391    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c418eea-47e0-4e43-b687-b63f56350e1c-lib-modules\") pod \"kube-proxy-tmf6v\" (UID: \"0c418eea-47e0-4e43-b687-b63f56350e1c\") " pod="kube-system/kube-proxy-tmf6v"
Feb 13 19:49:54.731060 kubelet[3647]: I0213 19:49:54.729477    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4q8w\" (UniqueName: \"kubernetes.io/projected/0c418eea-47e0-4e43-b687-b63f56350e1c-kube-api-access-k4q8w\") pod \"kube-proxy-tmf6v\" (UID: \"0c418eea-47e0-4e43-b687-b63f56350e1c\") " pod="kube-system/kube-proxy-tmf6v"
Feb 13 19:49:54.929468 kubelet[3647]: I0213 19:49:54.927478    3647 topology_manager.go:215] "Topology Admit Handler" podUID="6a5b795a-af3c-4757-8bb2-7801bb3aa061" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-n4784"
Feb 13 19:49:54.946488 kubelet[3647]: W0213 19:49:54.946444    3647 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-232" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-22-232' and this object
Feb 13 19:49:54.946708 kubelet[3647]: E0213 19:49:54.946687    3647 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-232" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-22-232' and this object
Feb 13 19:49:54.946937 kubelet[3647]: W0213 19:49:54.946807    3647 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-22-232" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-22-232' and this object
Feb 13 19:49:54.946937 kubelet[3647]: E0213 19:49:54.946844    3647 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-22-232" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-22-232' and this object
Feb 13 19:49:55.000807 containerd[2037]: time="2025-02-13T19:49:54.999419743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmf6v,Uid:0c418eea-47e0-4e43-b687-b63f56350e1c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:55.037679 kubelet[3647]: I0213 19:49:55.036260    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qnj\" (UniqueName: \"kubernetes.io/projected/6a5b795a-af3c-4757-8bb2-7801bb3aa061-kube-api-access-49qnj\") pod \"tigera-operator-7bc55997bb-n4784\" (UID: \"6a5b795a-af3c-4757-8bb2-7801bb3aa061\") " pod="tigera-operator/tigera-operator-7bc55997bb-n4784"
Feb 13 19:49:55.041075 kubelet[3647]: I0213 19:49:55.038370    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a5b795a-af3c-4757-8bb2-7801bb3aa061-var-lib-calico\") pod \"tigera-operator-7bc55997bb-n4784\" (UID: \"6a5b795a-af3c-4757-8bb2-7801bb3aa061\") " pod="tigera-operator/tigera-operator-7bc55997bb-n4784"
Feb 13 19:49:55.085073 containerd[2037]: time="2025-02-13T19:49:55.084395763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:55.085073 containerd[2037]: time="2025-02-13T19:49:55.084479079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:55.085073 containerd[2037]: time="2025-02-13T19:49:55.084504399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:55.085073 containerd[2037]: time="2025-02-13T19:49:55.084664995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:55.157998 containerd[2037]: time="2025-02-13T19:49:55.157849995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmf6v,Uid:0c418eea-47e0-4e43-b687-b63f56350e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"89cae087deabe04363f870177a557a46cfabefb186e14945a318b8a1894eba30\""
Feb 13 19:49:55.164542 containerd[2037]: time="2025-02-13T19:49:55.164443071Z" level=info msg="CreateContainer within sandbox \"89cae087deabe04363f870177a557a46cfabefb186e14945a318b8a1894eba30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:49:55.200164 containerd[2037]: time="2025-02-13T19:49:55.199331716Z" level=info msg="CreateContainer within sandbox \"89cae087deabe04363f870177a557a46cfabefb186e14945a318b8a1894eba30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c267df02e85a4c03d673bc12d2c276060ecff1ba76abc444a56ac951c2a84f3\""
Feb 13 19:49:55.202737 containerd[2037]: time="2025-02-13T19:49:55.202456564Z" level=info msg="StartContainer for \"0c267df02e85a4c03d673bc12d2c276060ecff1ba76abc444a56ac951c2a84f3\""
Feb 13 19:49:55.303454 containerd[2037]: time="2025-02-13T19:49:55.303379552Z" level=info msg="StartContainer for \"0c267df02e85a4c03d673bc12d2c276060ecff1ba76abc444a56ac951c2a84f3\" returns successfully"
Feb 13 19:49:55.337310 kubelet[3647]: I0213 19:49:55.336672    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmf6v" podStartSLOduration=1.33664984 podStartE2EDuration="1.33664984s" podCreationTimestamp="2025-02-13 19:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:55.336441292 +0000 UTC m=+14.418649260" watchObservedRunningTime="2025-02-13 19:49:55.33664984 +0000 UTC m=+14.418857808"
Feb 13 19:49:56.154133 kubelet[3647]: E0213 19:49:56.153309    3647 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:49:56.154133 kubelet[3647]: E0213 19:49:56.153351    3647 projected.go:200] Error preparing data for projected volume kube-api-access-49qnj for pod tigera-operator/tigera-operator-7bc55997bb-n4784: failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:49:56.154133 kubelet[3647]: E0213 19:49:56.153435    3647 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a5b795a-af3c-4757-8bb2-7801bb3aa061-kube-api-access-49qnj podName:6a5b795a-af3c-4757-8bb2-7801bb3aa061 nodeName:}" failed. No retries permitted until 2025-02-13 19:49:56.653405808 +0000 UTC m=+15.735613764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-49qnj" (UniqueName: "kubernetes.io/projected/6a5b795a-af3c-4757-8bb2-7801bb3aa061-kube-api-access-49qnj") pod "tigera-operator-7bc55997bb-n4784" (UID: "6a5b795a-af3c-4757-8bb2-7801bb3aa061") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:49:57.043753 containerd[2037]: time="2025-02-13T19:49:57.043620725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n4784,Uid:6a5b795a-af3c-4757-8bb2-7801bb3aa061,Namespace:tigera-operator,Attempt:0,}"
Feb 13 19:49:57.100008 containerd[2037]: time="2025-02-13T19:49:57.099591545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:57.100008 containerd[2037]: time="2025-02-13T19:49:57.099706601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:57.100008 containerd[2037]: time="2025-02-13T19:49:57.099756233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:57.100008 containerd[2037]: time="2025-02-13T19:49:57.099936953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:57.191518 containerd[2037]: time="2025-02-13T19:49:57.191469833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-n4784,Uid:6a5b795a-af3c-4757-8bb2-7801bb3aa061,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f9b5ecb82286145ccd4e68bce9b3ce983f54b35980242f0a916a6b201b316eb\""
Feb 13 19:49:57.196212 containerd[2037]: time="2025-02-13T19:49:57.195888569Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 19:49:58.747983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749566025.mount: Deactivated successfully.
Feb 13 19:49:59.375343 containerd[2037]: time="2025-02-13T19:49:59.375247640Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:59.377249 containerd[2037]: time="2025-02-13T19:49:59.377180948Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 19:49:59.380741 containerd[2037]: time="2025-02-13T19:49:59.380392064Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:59.385351 containerd[2037]: time="2025-02-13T19:49:59.385268060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:59.387108 containerd[2037]: time="2025-02-13T19:49:59.386894636Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.190941447s"
Feb 13 19:49:59.387108 containerd[2037]: time="2025-02-13T19:49:59.386948324Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 19:49:59.391948 containerd[2037]: time="2025-02-13T19:49:59.391508300Z" level=info msg="CreateContainer within sandbox \"0f9b5ecb82286145ccd4e68bce9b3ce983f54b35980242f0a916a6b201b316eb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 19:49:59.417844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048201912.mount: Deactivated successfully.
Feb 13 19:49:59.422528 containerd[2037]: time="2025-02-13T19:49:59.421697145Z" level=info msg="CreateContainer within sandbox \"0f9b5ecb82286145ccd4e68bce9b3ce983f54b35980242f0a916a6b201b316eb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b\""
Feb 13 19:49:59.426680 containerd[2037]: time="2025-02-13T19:49:59.426612561Z" level=info msg="StartContainer for \"4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b\""
Feb 13 19:49:59.513889 containerd[2037]: time="2025-02-13T19:49:59.513765753Z" level=info msg="StartContainer for \"4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b\" returns successfully"
Feb 13 19:50:01.193859 kubelet[3647]: I0213 19:50:01.193011    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-n4784" podStartSLOduration=4.99812383 podStartE2EDuration="7.192988653s" podCreationTimestamp="2025-02-13 19:49:54 +0000 UTC" firstStartedPulling="2025-02-13 19:49:57.193599929 +0000 UTC m=+16.275807873" lastFinishedPulling="2025-02-13 19:49:59.38846474 +0000 UTC m=+18.470672696" observedRunningTime="2025-02-13 19:50:00.354963861 +0000 UTC m=+19.437171841" watchObservedRunningTime="2025-02-13 19:50:01.192988653 +0000 UTC m=+20.275196621"
Feb 13 19:50:03.499214 kubelet[3647]: I0213 19:50:03.499147    3647 topology_manager.go:215] "Topology Admit Handler" podUID="a04d2874-f6af-402f-a940-53cd25f55557" podNamespace="calico-system" podName="calico-typha-6745685987-6xrld"
Feb 13 19:50:03.588970 kubelet[3647]: I0213 19:50:03.588680    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04d2874-f6af-402f-a940-53cd25f55557-tigera-ca-bundle\") pod \"calico-typha-6745685987-6xrld\" (UID: \"a04d2874-f6af-402f-a940-53cd25f55557\") " pod="calico-system/calico-typha-6745685987-6xrld"
Feb 13 19:50:03.588970 kubelet[3647]: I0213 19:50:03.588749    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a04d2874-f6af-402f-a940-53cd25f55557-typha-certs\") pod \"calico-typha-6745685987-6xrld\" (UID: \"a04d2874-f6af-402f-a940-53cd25f55557\") " pod="calico-system/calico-typha-6745685987-6xrld"
Feb 13 19:50:03.588970 kubelet[3647]: I0213 19:50:03.588792    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlmf\" (UniqueName: \"kubernetes.io/projected/a04d2874-f6af-402f-a940-53cd25f55557-kube-api-access-qhlmf\") pod \"calico-typha-6745685987-6xrld\" (UID: \"a04d2874-f6af-402f-a940-53cd25f55557\") " pod="calico-system/calico-typha-6745685987-6xrld"
Feb 13 19:50:03.687071 kubelet[3647]: I0213 19:50:03.683347    3647 topology_manager.go:215] "Topology Admit Handler" podUID="3841c69d-5adf-4ee1-9145-b14c30379f5e" podNamespace="calico-system" podName="calico-node-2n657"
Feb 13 19:50:03.790331 kubelet[3647]: I0213 19:50:03.790187    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3841c69d-5adf-4ee1-9145-b14c30379f5e-tigera-ca-bundle\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.792275 kubelet[3647]: I0213 19:50:03.791901    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3841c69d-5adf-4ee1-9145-b14c30379f5e-node-certs\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.793876 kubelet[3647]: I0213 19:50:03.793753    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-cni-bin-dir\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.794692 kubelet[3647]: I0213 19:50:03.794403    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4ws\" (UniqueName: \"kubernetes.io/projected/3841c69d-5adf-4ee1-9145-b14c30379f5e-kube-api-access-2v4ws\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.795719 kubelet[3647]: I0213 19:50:03.795549    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-var-lib-calico\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.797422 kubelet[3647]: I0213 19:50:03.797304    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-policysync\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.797637 kubelet[3647]: I0213 19:50:03.797516    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-lib-modules\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.800252 kubelet[3647]: I0213 19:50:03.797983    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-var-run-calico\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.800850 kubelet[3647]: I0213 19:50:03.800491    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-cni-log-dir\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.800850 kubelet[3647]: I0213 19:50:03.800602    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-flexvol-driver-host\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.800850 kubelet[3647]: I0213 19:50:03.800709    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-xtables-lock\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.800850 kubelet[3647]: I0213 19:50:03.800749    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3841c69d-5adf-4ee1-9145-b14c30379f5e-cni-net-dir\") pod \"calico-node-2n657\" (UID: \"3841c69d-5adf-4ee1-9145-b14c30379f5e\") " pod="calico-system/calico-node-2n657"
Feb 13 19:50:03.828632 containerd[2037]: time="2025-02-13T19:50:03.828565982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6745685987-6xrld,Uid:a04d2874-f6af-402f-a940-53cd25f55557,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:03.920139 kubelet[3647]: E0213 19:50:03.918436    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:03.920139 kubelet[3647]: W0213 19:50:03.918473    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:03.920139 kubelet[3647]: E0213 19:50:03.918512    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:03.942055 containerd[2037]: time="2025-02-13T19:50:03.939650175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:03.944365 containerd[2037]: time="2025-02-13T19:50:03.942454215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:03.951376 kubelet[3647]: E0213 19:50:03.949751    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:03.951376 kubelet[3647]: W0213 19:50:03.949907    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:03.951804 kubelet[3647]: E0213 19:50:03.949943    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:03.956057 kubelet[3647]: E0213 19:50:03.954461    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:03.956057 kubelet[3647]: W0213 19:50:03.954498    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:03.956057 kubelet[3647]: E0213 19:50:03.954533    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:03.959565 containerd[2037]: time="2025-02-13T19:50:03.952686015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:03.959565 containerd[2037]: time="2025-02-13T19:50:03.955354647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:03.969348 kubelet[3647]: E0213 19:50:03.969144    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:03.969348 kubelet[3647]: W0213 19:50:03.969180    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:03.969348 kubelet[3647]: E0213 19:50:03.969249    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.000748 kubelet[3647]: I0213 19:50:03.999421    3647 topology_manager.go:215] "Topology Admit Handler" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a" podNamespace="calico-system" podName="csi-node-driver-k6np6"
Feb 13 19:50:04.007088 kubelet[3647]: E0213 19:50:04.005291    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:04.042147 containerd[2037]: time="2025-02-13T19:50:04.036983627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2n657,Uid:3841c69d-5adf-4ee1-9145-b14c30379f5e,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:04.069364 kubelet[3647]: E0213 19:50:04.069320    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.069881 kubelet[3647]: W0213 19:50:04.069545    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.070258 kubelet[3647]: E0213 19:50:04.070193    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.070852 kubelet[3647]: E0213 19:50:04.070823    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.071140 kubelet[3647]: W0213 19:50:04.070981    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.071140 kubelet[3647]: E0213 19:50:04.071042    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.071783 kubelet[3647]: E0213 19:50:04.071583    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.071783 kubelet[3647]: W0213 19:50:04.071610    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.071783 kubelet[3647]: E0213 19:50:04.071639    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.072488 kubelet[3647]: E0213 19:50:04.072270    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.072488 kubelet[3647]: W0213 19:50:04.072297    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.072488 kubelet[3647]: E0213 19:50:04.072330    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.073059 kubelet[3647]: E0213 19:50:04.072921    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.073059 kubelet[3647]: W0213 19:50:04.072947    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.073059 kubelet[3647]: E0213 19:50:04.072973    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.073682 kubelet[3647]: E0213 19:50:04.073655    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.074047 kubelet[3647]: W0213 19:50:04.073814    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.074047 kubelet[3647]: E0213 19:50:04.073855    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.075321 kubelet[3647]: E0213 19:50:04.075287    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.075940 kubelet[3647]: W0213 19:50:04.075485    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.075940 kubelet[3647]: E0213 19:50:04.075525    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.077968 kubelet[3647]: E0213 19:50:04.077690    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.077968 kubelet[3647]: W0213 19:50:04.077725    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.077968 kubelet[3647]: E0213 19:50:04.077758    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.080945 kubelet[3647]: E0213 19:50:04.080177    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.080945 kubelet[3647]: W0213 19:50:04.080210    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.080945 kubelet[3647]: E0213 19:50:04.080243    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.084292 kubelet[3647]: E0213 19:50:04.082794    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.084292 kubelet[3647]: W0213 19:50:04.082829    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.084292 kubelet[3647]: E0213 19:50:04.084079    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.086065 kubelet[3647]: E0213 19:50:04.085795    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.086065 kubelet[3647]: W0213 19:50:04.085830    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.086065 kubelet[3647]: E0213 19:50:04.085864    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.087909 kubelet[3647]: E0213 19:50:04.087693    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.088261 kubelet[3647]: W0213 19:50:04.087857    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.088261 kubelet[3647]: E0213 19:50:04.088305    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.089150 kubelet[3647]: E0213 19:50:04.089015    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.089150 kubelet[3647]: W0213 19:50:04.089084    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.089524 kubelet[3647]: E0213 19:50:04.089118    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.090180 kubelet[3647]: E0213 19:50:04.089927    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.090180 kubelet[3647]: W0213 19:50:04.089968    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.090180 kubelet[3647]: E0213 19:50:04.089999    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.090898 kubelet[3647]: E0213 19:50:04.090815    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.090898 kubelet[3647]: W0213 19:50:04.090843    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.091141 kubelet[3647]: E0213 19:50:04.090985    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.091821 kubelet[3647]: E0213 19:50:04.091720    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.091821 kubelet[3647]: W0213 19:50:04.091749    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.092397 kubelet[3647]: E0213 19:50:04.091885    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.094581 kubelet[3647]: E0213 19:50:04.092832    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.094581 kubelet[3647]: W0213 19:50:04.094331    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.094581 kubelet[3647]: E0213 19:50:04.094387    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.096249 kubelet[3647]: E0213 19:50:04.095728    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.096249 kubelet[3647]: W0213 19:50:04.095761    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.096249 kubelet[3647]: E0213 19:50:04.095806    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.099067 kubelet[3647]: E0213 19:50:04.097817    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.099067 kubelet[3647]: W0213 19:50:04.097852    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.099067 kubelet[3647]: E0213 19:50:04.097892    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.101785 kubelet[3647]: E0213 19:50:04.099885    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.101785 kubelet[3647]: W0213 19:50:04.100123    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.101785 kubelet[3647]: E0213 19:50:04.100160    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.105294 kubelet[3647]: E0213 19:50:04.105257    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.105494 kubelet[3647]: W0213 19:50:04.105467    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.105616 kubelet[3647]: E0213 19:50:04.105593    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.105916 kubelet[3647]: I0213 19:50:04.105747    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jqk\" (UniqueName: \"kubernetes.io/projected/a53a8aa8-6a9b-4643-89f4-26162e962c9a-kube-api-access-j7jqk\") pod \"csi-node-driver-k6np6\" (UID: \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\") " pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:04.106448 kubelet[3647]: E0213 19:50:04.106360    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.106448 kubelet[3647]: W0213 19:50:04.106390    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.106837 kubelet[3647]: E0213 19:50:04.106638    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.107290 kubelet[3647]: E0213 19:50:04.107120    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.107290 kubelet[3647]: W0213 19:50:04.107142    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.107290 kubelet[3647]: E0213 19:50:04.107165    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.107710 kubelet[3647]: I0213 19:50:04.107552    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a53a8aa8-6a9b-4643-89f4-26162e962c9a-varrun\") pod \"csi-node-driver-k6np6\" (UID: \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\") " pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:04.107871 kubelet[3647]: E0213 19:50:04.107853    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.109189 kubelet[3647]: W0213 19:50:04.107954    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.109189 kubelet[3647]: E0213 19:50:04.107981    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.110519 kubelet[3647]: E0213 19:50:04.109875    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.110519 kubelet[3647]: W0213 19:50:04.109906    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.110519 kubelet[3647]: E0213 19:50:04.109953    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.112291 kubelet[3647]: E0213 19:50:04.111702    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.112291 kubelet[3647]: W0213 19:50:04.111736    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.112291 kubelet[3647]: E0213 19:50:04.112142    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.114698 kubelet[3647]: E0213 19:50:04.113883    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.114698 kubelet[3647]: W0213 19:50:04.113918    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.114698 kubelet[3647]: E0213 19:50:04.113950    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.114698 kubelet[3647]: I0213 19:50:04.114005    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a53a8aa8-6a9b-4643-89f4-26162e962c9a-registration-dir\") pod \"csi-node-driver-k6np6\" (UID: \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\") " pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:04.120230 kubelet[3647]: E0213 19:50:04.120188    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.120539 kubelet[3647]: W0213 19:50:04.120409    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.120852 kubelet[3647]: E0213 19:50:04.120737    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.121576 kubelet[3647]: I0213 19:50:04.121249    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a53a8aa8-6a9b-4643-89f4-26162e962c9a-socket-dir\") pod \"csi-node-driver-k6np6\" (UID: \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\") " pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:04.124667 kubelet[3647]: E0213 19:50:04.123781    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.124667 kubelet[3647]: W0213 19:50:04.123813    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.124667 kubelet[3647]: E0213 19:50:04.123847    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.127769 kubelet[3647]: E0213 19:50:04.126330    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.127769 kubelet[3647]: W0213 19:50:04.126382    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.128302 kubelet[3647]: E0213 19:50:04.128073    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.129667 kubelet[3647]: E0213 19:50:04.129005    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.129667 kubelet[3647]: W0213 19:50:04.129054    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.132049 kubelet[3647]: E0213 19:50:04.131683    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.132049 kubelet[3647]: I0213 19:50:04.131755    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a53a8aa8-6a9b-4643-89f4-26162e962c9a-kubelet-dir\") pod \"csi-node-driver-k6np6\" (UID: \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\") " pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:04.135603 kubelet[3647]: E0213 19:50:04.132808    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.135603 kubelet[3647]: W0213 19:50:04.132841    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.135603 kubelet[3647]: E0213 19:50:04.132888    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.137051 kubelet[3647]: E0213 19:50:04.136534    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.137051 kubelet[3647]: W0213 19:50:04.136572    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.137506 kubelet[3647]: E0213 19:50:04.137227    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.139837 kubelet[3647]: E0213 19:50:04.139483    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.140299 kubelet[3647]: W0213 19:50:04.140164    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.141234 kubelet[3647]: E0213 19:50:04.141151    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.144143 kubelet[3647]: E0213 19:50:04.143493    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.144143 kubelet[3647]: W0213 19:50:04.143932    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.144143 kubelet[3647]: E0213 19:50:04.144006    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.193055 containerd[2037]: time="2025-02-13T19:50:04.191535120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:04.193055 containerd[2037]: time="2025-02-13T19:50:04.191657148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:04.193055 containerd[2037]: time="2025-02-13T19:50:04.191684508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:04.193055 containerd[2037]: time="2025-02-13T19:50:04.191847036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:04.234477 kubelet[3647]: E0213 19:50:04.234441    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.235154 kubelet[3647]: W0213 19:50:04.234863    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.235470 kubelet[3647]: E0213 19:50:04.235419    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.238424 kubelet[3647]: E0213 19:50:04.238341    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.238818 kubelet[3647]: W0213 19:50:04.238530    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.239355 kubelet[3647]: E0213 19:50:04.239225    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.241966 kubelet[3647]: E0213 19:50:04.241901    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.241966 kubelet[3647]: W0213 19:50:04.241942    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.241966 kubelet[3647]: E0213 19:50:04.241990    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.247222 kubelet[3647]: E0213 19:50:04.247172    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.247222 kubelet[3647]: W0213 19:50:04.247212    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.247558 kubelet[3647]: E0213 19:50:04.247363    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.251187 kubelet[3647]: E0213 19:50:04.251136    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.251187 kubelet[3647]: W0213 19:50:04.251176    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.252895 kubelet[3647]: E0213 19:50:04.251298    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.253322 kubelet[3647]: E0213 19:50:04.253273    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.253322 kubelet[3647]: W0213 19:50:04.253315    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.254443 kubelet[3647]: E0213 19:50:04.254168    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.254443 kubelet[3647]: E0213 19:50:04.254416    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.254443 kubelet[3647]: W0213 19:50:04.254437    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.255157 kubelet[3647]: E0213 19:50:04.254591    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.255157 kubelet[3647]: E0213 19:50:04.254768    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.255157 kubelet[3647]: W0213 19:50:04.254784    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258176    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258243    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.258936 kubelet[3647]: W0213 19:50:04.258261    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258599    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258630    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.258936 kubelet[3647]: W0213 19:50:04.258648    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258779    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.258979    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.258936 kubelet[3647]: W0213 19:50:04.258996    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.258936 kubelet[3647]: E0213 19:50:04.259045    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.260875 kubelet[3647]: E0213 19:50:04.259369    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.260875 kubelet[3647]: W0213 19:50:04.259389    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.260875 kubelet[3647]: E0213 19:50:04.259422    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.264763 kubelet[3647]: E0213 19:50:04.264217    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.264763 kubelet[3647]: W0213 19:50:04.264244    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.264763 kubelet[3647]: E0213 19:50:04.264293    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.267345 kubelet[3647]: E0213 19:50:04.267283    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.267345 kubelet[3647]: W0213 19:50:04.267325    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.267778 kubelet[3647]: E0213 19:50:04.267602    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.270077 kubelet[3647]: E0213 19:50:04.269198    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.270077 kubelet[3647]: W0213 19:50:04.269243    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.270365 kubelet[3647]: E0213 19:50:04.270291    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.272263 kubelet[3647]: E0213 19:50:04.272199    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.272263 kubelet[3647]: W0213 19:50:04.272234    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.274182 kubelet[3647]: E0213 19:50:04.274131    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.274364 kubelet[3647]: W0213 19:50:04.274169    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.276732 kubelet[3647]: E0213 19:50:04.275602    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.276732 kubelet[3647]: E0213 19:50:04.275692    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.276732 kubelet[3647]: E0213 19:50:04.275778    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.276732 kubelet[3647]: W0213 19:50:04.275795    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.280300 kubelet[3647]: E0213 19:50:04.279333    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.280300 kubelet[3647]: W0213 19:50:04.279370    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.280300 kubelet[3647]: E0213 19:50:04.279770    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.281604 kubelet[3647]: W0213 19:50:04.279789    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.282474 kubelet[3647]: E0213 19:50:04.282411    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.282474 kubelet[3647]: W0213 19:50:04.282448    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.282666 kubelet[3647]: E0213 19:50:04.282485    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.282950 kubelet[3647]: E0213 19:50:04.282906    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.282950 kubelet[3647]: W0213 19:50:04.282935    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.283105 kubelet[3647]: E0213 19:50:04.282964    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.286197 kubelet[3647]: E0213 19:50:04.285855    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.286197 kubelet[3647]: W0213 19:50:04.285896    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.286197 kubelet[3647]: E0213 19:50:04.285931    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.286197 kubelet[3647]: E0213 19:50:04.285980    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.286529 kubelet[3647]: E0213 19:50:04.286444    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.286529 kubelet[3647]: W0213 19:50:04.286465    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.286529 kubelet[3647]: E0213 19:50:04.286490    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.286529 kubelet[3647]: E0213 19:50:04.286525    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.289793 kubelet[3647]: E0213 19:50:04.287543    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.289793 kubelet[3647]: W0213 19:50:04.287578    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.289793 kubelet[3647]: E0213 19:50:04.287612    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.289793 kubelet[3647]: E0213 19:50:04.289650    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:04.343173 kubelet[3647]: E0213 19:50:04.343109    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:04.343887 kubelet[3647]: W0213 19:50:04.343846    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:04.344157 kubelet[3647]: E0213 19:50:04.344052    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
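The burst of repeated errors above is the kubelet's FlexVolume prober: it periodically scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the driver directory nodeagent~uds, and invokes the driver executable with the init argument, expecting a JSON status object on stdout. Because the uds binary has not been installed yet (Calico's flexvol-driver init container, pulled later in this log from ghcr.io/flatcar/calico/pod2daemon-flexvol, is what copies it into place), the call produces no output and the JSON unmarshal fails; the messages are noisy but harmless. As a minimal sketch, assuming only the standard FlexVolume calling convention rather than the real uds driver, an init handler only needs to emit something like this:

    package main

    // Hypothetical FlexVolume driver stub, not the Calico uds driver: it only
    // answers the kubelet's "init" probe with the JSON the prober tries to
    // unmarshal; any other call is reported as not supported.
    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        resp := map[string]interface{}{"status": "Not supported"}
        if len(os.Args) > 1 && os.Args[1] == "init" {
            resp = map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            }
        }
        out, _ := json.Marshal(resp)
        fmt.Println(string(out))
    }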
Feb 13 19:50:04.363241 containerd[2037]: time="2025-02-13T19:50:04.362978761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6745685987-6xrld,Uid:a04d2874-f6af-402f-a940-53cd25f55557,Namespace:calico-system,Attempt:0,} returns sandbox id \"05585a278349f05115faf2012993905eca7d18a58bbc7d08dea96aacd19fb2af\""
Feb 13 19:50:04.366386 containerd[2037]: time="2025-02-13T19:50:04.366307309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2n657,Uid:3841c69d-5adf-4ee1-9145-b14c30379f5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\""
Feb 13 19:50:04.371809 containerd[2037]: time="2025-02-13T19:50:04.371762221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 19:50:05.810921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825906421.mount: Deactivated successfully.
Feb 13 19:50:06.175305 kubelet[3647]: E0213 19:50:06.174591    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:06.549415 containerd[2037]: time="2025-02-13T19:50:06.548368948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:06.552142 containerd[2037]: time="2025-02-13T19:50:06.551755156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Feb 13 19:50:06.554742 containerd[2037]: time="2025-02-13T19:50:06.554654500Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:06.561256 containerd[2037]: time="2025-02-13T19:50:06.561159700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:06.564054 containerd[2037]: time="2025-02-13T19:50:06.563791072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.191559483s"
Feb 13 19:50:06.564054 containerd[2037]: time="2025-02-13T19:50:06.563864188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Feb 13 19:50:06.569159 containerd[2037]: time="2025-02-13T19:50:06.566922904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:50:06.596391 containerd[2037]: time="2025-02-13T19:50:06.596319556Z" level=info msg="CreateContainer within sandbox \"05585a278349f05115faf2012993905eca7d18a58bbc7d08dea96aacd19fb2af\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 19:50:06.642960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476750541.mount: Deactivated successfully.
Feb 13 19:50:06.646460 containerd[2037]: time="2025-02-13T19:50:06.646235584Z" level=info msg="CreateContainer within sandbox \"05585a278349f05115faf2012993905eca7d18a58bbc7d08dea96aacd19fb2af\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a77301ff080243db17c40adf4901cf139fbb6231b9ddba52165019a17c8a5017\""
Feb 13 19:50:06.653240 containerd[2037]: time="2025-02-13T19:50:06.652092412Z" level=info msg="StartContainer for \"a77301ff080243db17c40adf4901cf139fbb6231b9ddba52165019a17c8a5017\""
Feb 13 19:50:06.793304 containerd[2037]: time="2025-02-13T19:50:06.793212689Z" level=info msg="StartContainer for \"a77301ff080243db17c40adf4901cf139fbb6231b9ddba52165019a17c8a5017\" returns successfully"
Feb 13 19:50:07.429329 kubelet[3647]: E0213 19:50:07.429272    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.429329 kubelet[3647]: W0213 19:50:07.429308    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.430356 kubelet[3647]: E0213 19:50:07.429340    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.430356 kubelet[3647]: E0213 19:50:07.429671    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.430356 kubelet[3647]: W0213 19:50:07.429689    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.430356 kubelet[3647]: E0213 19:50:07.429712    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.430356 kubelet[3647]: E0213 19:50:07.430050    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.430356 kubelet[3647]: W0213 19:50:07.430072    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.430356 kubelet[3647]: E0213 19:50:07.430094    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.431099 kubelet[3647]: E0213 19:50:07.430397    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.431099 kubelet[3647]: W0213 19:50:07.430415    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.431099 kubelet[3647]: E0213 19:50:07.430435    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.431099 kubelet[3647]: E0213 19:50:07.430719    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.431099 kubelet[3647]: W0213 19:50:07.430736    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.431099 kubelet[3647]: E0213 19:50:07.430758    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.431896 kubelet[3647]: E0213 19:50:07.431179    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.431896 kubelet[3647]: W0213 19:50:07.431198    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.431896 kubelet[3647]: E0213 19:50:07.431222    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.431896 kubelet[3647]: E0213 19:50:07.431535    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.431896 kubelet[3647]: W0213 19:50:07.431551    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.431896 kubelet[3647]: E0213 19:50:07.431572    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.431896 kubelet[3647]: E0213 19:50:07.431892    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.431896 kubelet[3647]: W0213 19:50:07.431909    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.432677 kubelet[3647]: E0213 19:50:07.431930    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.432677 kubelet[3647]: E0213 19:50:07.432307    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.432677 kubelet[3647]: W0213 19:50:07.432324    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.432677 kubelet[3647]: E0213 19:50:07.432344    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.432677 kubelet[3647]: E0213 19:50:07.432633    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.432677 kubelet[3647]: W0213 19:50:07.432648    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.432677 kubelet[3647]: E0213 19:50:07.432667    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.433514 kubelet[3647]: E0213 19:50:07.432941    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.433514 kubelet[3647]: W0213 19:50:07.432957    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.433514 kubelet[3647]: E0213 19:50:07.432978    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.433514 kubelet[3647]: E0213 19:50:07.433347    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.433514 kubelet[3647]: W0213 19:50:07.433366    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.433514 kubelet[3647]: E0213 19:50:07.433387    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.434109 kubelet[3647]: E0213 19:50:07.434072    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.434109 kubelet[3647]: W0213 19:50:07.434099    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.434297 kubelet[3647]: E0213 19:50:07.434123    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.434516 kubelet[3647]: E0213 19:50:07.434484    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.434516 kubelet[3647]: W0213 19:50:07.434511    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.434668 kubelet[3647]: E0213 19:50:07.434535    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.434979 kubelet[3647]: E0213 19:50:07.434953    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.435119 kubelet[3647]: W0213 19:50:07.434978    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.435119 kubelet[3647]: E0213 19:50:07.435000    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.483000 kubelet[3647]: E0213 19:50:07.482921    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.483000 kubelet[3647]: W0213 19:50:07.482956    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.483554 kubelet[3647]: E0213 19:50:07.483281    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.483964 kubelet[3647]: E0213 19:50:07.483901    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.483964 kubelet[3647]: W0213 19:50:07.483924    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.484291 kubelet[3647]: E0213 19:50:07.484133    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.484534 kubelet[3647]: E0213 19:50:07.484342    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.484534 kubelet[3647]: W0213 19:50:07.484363    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.484534 kubelet[3647]: E0213 19:50:07.484396    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.485044 kubelet[3647]: E0213 19:50:07.484894    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.485044 kubelet[3647]: W0213 19:50:07.484917    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.485044 kubelet[3647]: E0213 19:50:07.484949    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.485342 kubelet[3647]: E0213 19:50:07.485296    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.485342 kubelet[3647]: W0213 19:50:07.485324    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.485342 kubelet[3647]: E0213 19:50:07.485359    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.485851 kubelet[3647]: E0213 19:50:07.485683    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.485851 kubelet[3647]: W0213 19:50:07.485700    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.485851 kubelet[3647]: E0213 19:50:07.485731    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.486071 kubelet[3647]: E0213 19:50:07.485994    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.486071 kubelet[3647]: W0213 19:50:07.486039    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.486385 kubelet[3647]: E0213 19:50:07.486114    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.486385 kubelet[3647]: E0213 19:50:07.486373    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.486643 kubelet[3647]: W0213 19:50:07.486390    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.486643 kubelet[3647]: E0213 19:50:07.486448    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.486983 kubelet[3647]: E0213 19:50:07.486950    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.486983 kubelet[3647]: W0213 19:50:07.486976    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.487334 kubelet[3647]: E0213 19:50:07.487064    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.487522 kubelet[3647]: E0213 19:50:07.487334    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.487522 kubelet[3647]: W0213 19:50:07.487352    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.487522 kubelet[3647]: E0213 19:50:07.487385    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.488134 kubelet[3647]: E0213 19:50:07.487995    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.488134 kubelet[3647]: W0213 19:50:07.488054    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.488134 kubelet[3647]: E0213 19:50:07.488096    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.488850 kubelet[3647]: E0213 19:50:07.488665    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.488850 kubelet[3647]: W0213 19:50:07.488686    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.488850 kubelet[3647]: E0213 19:50:07.488719    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.489336 kubelet[3647]: E0213 19:50:07.489185    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.489336 kubelet[3647]: W0213 19:50:07.489231    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.489336 kubelet[3647]: E0213 19:50:07.489276    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.489960 kubelet[3647]: E0213 19:50:07.489841    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.489960 kubelet[3647]: W0213 19:50:07.489861    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.489960 kubelet[3647]: E0213 19:50:07.489904    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.490477 kubelet[3647]: E0213 19:50:07.490350    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.490477 kubelet[3647]: W0213 19:50:07.490394    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.490477 kubelet[3647]: E0213 19:50:07.490438    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.491209 kubelet[3647]: E0213 19:50:07.490947    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.491209 kubelet[3647]: W0213 19:50:07.490969    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.491209 kubelet[3647]: E0213 19:50:07.491007    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.492214 kubelet[3647]: E0213 19:50:07.492104    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.492214 kubelet[3647]: W0213 19:50:07.492136    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.492214 kubelet[3647]: E0213 19:50:07.492183    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:07.492618 kubelet[3647]: E0213 19:50:07.492590    3647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:50:07.492693 kubelet[3647]: W0213 19:50:07.492617    3647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:50:07.492693 kubelet[3647]: E0213 19:50:07.492644    3647 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:50:08.029628 containerd[2037]: time="2025-02-13T19:50:08.029549883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:08.031967 containerd[2037]: time="2025-02-13T19:50:08.031862067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Feb 13 19:50:08.034074 containerd[2037]: time="2025-02-13T19:50:08.033946143Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:08.040176 containerd[2037]: time="2025-02-13T19:50:08.040125975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:08.045056 containerd[2037]: time="2025-02-13T19:50:08.044955663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.477961467s"
Feb 13 19:50:08.045187 containerd[2037]: time="2025-02-13T19:50:08.045066759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 19:50:08.052750 containerd[2037]: time="2025-02-13T19:50:08.052571907Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:50:08.088436 containerd[2037]: time="2025-02-13T19:50:08.088380280Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19\""
Feb 13 19:50:08.090141 containerd[2037]: time="2025-02-13T19:50:08.089347744Z" level=info msg="StartContainer for \"7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19\""
Feb 13 19:50:08.173611 kubelet[3647]: E0213 19:50:08.173364    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:08.233675 containerd[2037]: time="2025-02-13T19:50:08.233603692Z" level=info msg="StartContainer for \"7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19\" returns successfully"
Feb 13 19:50:08.302222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19-rootfs.mount: Deactivated successfully.
Feb 13 19:50:08.387614 kubelet[3647]: I0213 19:50:08.387582    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:08.418377 kubelet[3647]: I0213 19:50:08.416143    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6745685987-6xrld" podStartSLOduration=3.21917525 podStartE2EDuration="5.416118665s" podCreationTimestamp="2025-02-13 19:50:03 +0000 UTC" firstStartedPulling="2025-02-13 19:50:04.369411469 +0000 UTC m=+23.451619425" lastFinishedPulling="2025-02-13 19:50:06.566354872 +0000 UTC m=+25.648562840" observedRunningTime="2025-02-13 19:50:07.402455776 +0000 UTC m=+26.484663756" watchObservedRunningTime="2025-02-13 19:50:08.416118665 +0000 UTC m=+27.498326681"
Feb 13 19:50:08.534747 containerd[2037]: time="2025-02-13T19:50:08.534667782Z" level=info msg="shim disconnected" id=7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19 namespace=k8s.io
Feb 13 19:50:08.534747 containerd[2037]: time="2025-02-13T19:50:08.534742362Z" level=warning msg="cleaning up after shim disconnected" id=7823f9f46a8b743ee3bf4c5fa68bca1b2a350fe5404bbae27703cf32d8b6be19 namespace=k8s.io
Feb 13 19:50:08.535403 containerd[2037]: time="2025-02-13T19:50:08.534768402Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:09.399581 containerd[2037]: time="2025-02-13T19:50:09.399154830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:50:10.173184 kubelet[3647]: E0213 19:50:10.173090    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:12.173849 kubelet[3647]: E0213 19:50:12.173604    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:13.369749 containerd[2037]: time="2025-02-13T19:50:13.369686962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:13.371308 containerd[2037]: time="2025-02-13T19:50:13.371238706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 19:50:13.372733 containerd[2037]: time="2025-02-13T19:50:13.372658510Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:13.376981 containerd[2037]: time="2025-02-13T19:50:13.376883566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:13.378772 containerd[2037]: time="2025-02-13T19:50:13.378599542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.979381568s"
Feb 13 19:50:13.378772 containerd[2037]: time="2025-02-13T19:50:13.378652522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 19:50:13.383868 containerd[2037]: time="2025-02-13T19:50:13.383811382Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:50:13.405215 containerd[2037]: time="2025-02-13T19:50:13.405146602Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77\""
Feb 13 19:50:13.409593 containerd[2037]: time="2025-02-13T19:50:13.408946774Z" level=info msg="StartContainer for \"a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77\""
Feb 13 19:50:13.416450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923311318.mount: Deactivated successfully.
Feb 13 19:50:13.474396 systemd[1]: run-containerd-runc-k8s.io-a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77-runc.dRULou.mount: Deactivated successfully.
Feb 13 19:50:13.539217 containerd[2037]: time="2025-02-13T19:50:13.539120075Z" level=info msg="StartContainer for \"a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77\" returns successfully"
Feb 13 19:50:14.173528 kubelet[3647]: E0213 19:50:14.173424    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:14.391309 containerd[2037]: time="2025-02-13T19:50:14.391206323Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
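This reload failure is containerd's CNI watcher reacting to Calico's install-cni container: the container has just written /etc/cni/net.d/calico-kubeconfig, but no *.conflist network configuration exists in /etc/cni/net.d yet, so the runtime still reports "cni plugin not initialized" (the same condition behind the recurring csi-node-driver-k6np6 "network is not ready" messages). Once install-cni finishes it drops a network config into that directory; as an illustration only, a Calico conflist is typically shaped roughly like the following, though the exact file name and fields depend on the Calico version and installation method:

    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "nodename": "<node name>",
          "ipam": { "type": "calico-ipam" },
          "policy": { "type": "k8s" },
          "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
        },
        { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } }
      ]
    }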
Feb 13 19:50:14.442379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77-rootfs.mount: Deactivated successfully.
Feb 13 19:50:14.479069 kubelet[3647]: I0213 19:50:14.476517    3647 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:50:14.533236 kubelet[3647]: I0213 19:50:14.532598    3647 topology_manager.go:215] "Topology Admit Handler" podUID="244383ed-be9f-4e55-9206-721fd35d9360" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vbqfz"
Feb 13 19:50:14.565659 kubelet[3647]: I0213 19:50:14.564181    3647 topology_manager.go:215] "Topology Admit Handler" podUID="8a64d6ad-b799-442a-9fba-40d222a33c18" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hjblt"
Feb 13 19:50:14.581017 kubelet[3647]: I0213 19:50:14.580955    3647 topology_manager.go:215] "Topology Admit Handler" podUID="d94b6c05-9941-4d00-9fa6-b2a1c452394b" podNamespace="calico-apiserver" podName="calico-apiserver-899bfd54-c6kfl"
Feb 13 19:50:14.590913 kubelet[3647]: I0213 19:50:14.590818    3647 topology_manager.go:215] "Topology Admit Handler" podUID="91cd868f-3147-489a-9154-5e881b7a25ed" podNamespace="calico-system" podName="calico-kube-controllers-78f4c55485-sms9j"
Feb 13 19:50:14.593214 kubelet[3647]: I0213 19:50:14.593145    3647 topology_manager.go:215] "Topology Admit Handler" podUID="2b60d0c4-0403-4702-b85f-ae7526b9b83b" podNamespace="calico-apiserver" podName="calico-apiserver-899bfd54-5wp2p"
Feb 13 19:50:14.642226 kubelet[3647]: I0213 19:50:14.641663    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zk2r\" (UniqueName: \"kubernetes.io/projected/244383ed-be9f-4e55-9206-721fd35d9360-kube-api-access-9zk2r\") pod \"coredns-7db6d8ff4d-vbqfz\" (UID: \"244383ed-be9f-4e55-9206-721fd35d9360\") " pod="kube-system/coredns-7db6d8ff4d-vbqfz"
Feb 13 19:50:14.642226 kubelet[3647]: I0213 19:50:14.641745    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a64d6ad-b799-442a-9fba-40d222a33c18-config-volume\") pod \"coredns-7db6d8ff4d-hjblt\" (UID: \"8a64d6ad-b799-442a-9fba-40d222a33c18\") " pod="kube-system/coredns-7db6d8ff4d-hjblt"
Feb 13 19:50:14.642226 kubelet[3647]: I0213 19:50:14.641788    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b60d0c4-0403-4702-b85f-ae7526b9b83b-calico-apiserver-certs\") pod \"calico-apiserver-899bfd54-5wp2p\" (UID: \"2b60d0c4-0403-4702-b85f-ae7526b9b83b\") " pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p"
Feb 13 19:50:14.642226 kubelet[3647]: I0213 19:50:14.641833    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84557\" (UniqueName: \"kubernetes.io/projected/d94b6c05-9941-4d00-9fa6-b2a1c452394b-kube-api-access-84557\") pod \"calico-apiserver-899bfd54-c6kfl\" (UID: \"d94b6c05-9941-4d00-9fa6-b2a1c452394b\") " pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl"
Feb 13 19:50:14.642226 kubelet[3647]: I0213 19:50:14.641918    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91cd868f-3147-489a-9154-5e881b7a25ed-tigera-ca-bundle\") pod \"calico-kube-controllers-78f4c55485-sms9j\" (UID: \"91cd868f-3147-489a-9154-5e881b7a25ed\") " pod="calico-system/calico-kube-controllers-78f4c55485-sms9j"
Feb 13 19:50:14.644607 kubelet[3647]: I0213 19:50:14.643703    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/244383ed-be9f-4e55-9206-721fd35d9360-config-volume\") pod \"coredns-7db6d8ff4d-vbqfz\" (UID: \"244383ed-be9f-4e55-9206-721fd35d9360\") " pod="kube-system/coredns-7db6d8ff4d-vbqfz"
Feb 13 19:50:14.644607 kubelet[3647]: I0213 19:50:14.643795    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d94b6c05-9941-4d00-9fa6-b2a1c452394b-calico-apiserver-certs\") pod \"calico-apiserver-899bfd54-c6kfl\" (UID: \"d94b6c05-9941-4d00-9fa6-b2a1c452394b\") " pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl"
Feb 13 19:50:14.644607 kubelet[3647]: I0213 19:50:14.643864    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4q6q\" (UniqueName: \"kubernetes.io/projected/8a64d6ad-b799-442a-9fba-40d222a33c18-kube-api-access-w4q6q\") pod \"coredns-7db6d8ff4d-hjblt\" (UID: \"8a64d6ad-b799-442a-9fba-40d222a33c18\") " pod="kube-system/coredns-7db6d8ff4d-hjblt"
Feb 13 19:50:14.644607 kubelet[3647]: I0213 19:50:14.643952    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp84k\" (UniqueName: \"kubernetes.io/projected/91cd868f-3147-489a-9154-5e881b7a25ed-kube-api-access-kp84k\") pod \"calico-kube-controllers-78f4c55485-sms9j\" (UID: \"91cd868f-3147-489a-9154-5e881b7a25ed\") " pod="calico-system/calico-kube-controllers-78f4c55485-sms9j"
Feb 13 19:50:14.644607 kubelet[3647]: I0213 19:50:14.644001    3647 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjghm\" (UniqueName: \"kubernetes.io/projected/2b60d0c4-0403-4702-b85f-ae7526b9b83b-kube-api-access-gjghm\") pod \"calico-apiserver-899bfd54-5wp2p\" (UID: \"2b60d0c4-0403-4702-b85f-ae7526b9b83b\") " pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p"
Feb 13 19:50:14.882625 containerd[2037]: time="2025-02-13T19:50:14.882472297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjblt,Uid:8a64d6ad-b799-442a-9fba-40d222a33c18,Namespace:kube-system,Attempt:0,}"
Feb 13 19:50:14.884454 containerd[2037]: time="2025-02-13T19:50:14.884280445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vbqfz,Uid:244383ed-be9f-4e55-9206-721fd35d9360,Namespace:kube-system,Attempt:0,}"
Feb 13 19:50:14.897884 containerd[2037]: time="2025-02-13T19:50:14.897813661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-c6kfl,Uid:d94b6c05-9941-4d00-9fa6-b2a1c452394b,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:50:14.916502 containerd[2037]: time="2025-02-13T19:50:14.916314685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4c55485-sms9j,Uid:91cd868f-3147-489a-9154-5e881b7a25ed,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:14.917124 containerd[2037]: time="2025-02-13T19:50:14.917082685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-5wp2p,Uid:2b60d0c4-0403-4702-b85f-ae7526b9b83b,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:50:15.370225 containerd[2037]: time="2025-02-13T19:50:15.370152696Z" level=info msg="shim disconnected" id=a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77 namespace=k8s.io
Feb 13 19:50:15.371078 containerd[2037]: time="2025-02-13T19:50:15.370955496Z" level=warning msg="cleaning up after shim disconnected" id=a576a26bccbe9ea8fddf53d235428d0587a1633c32a29521eaf1c871f4945e77 namespace=k8s.io
Feb 13 19:50:15.371078 containerd[2037]: time="2025-02-13T19:50:15.371002560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:15.482815 containerd[2037]: time="2025-02-13T19:50:15.481769916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:50:15.738384 containerd[2037]: time="2025-02-13T19:50:15.737490254Z" level=error msg="Failed to destroy network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.745219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90-shm.mount: Deactivated successfully.
Feb 13 19:50:15.749917 containerd[2037]: time="2025-02-13T19:50:15.747658274Z" level=error msg="encountered an error cleaning up failed sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.752105 containerd[2037]: time="2025-02-13T19:50:15.749885006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjblt,Uid:8a64d6ad-b799-442a-9fba-40d222a33c18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.753001 kubelet[3647]: E0213 19:50:15.752386    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.753001 kubelet[3647]: E0213 19:50:15.752497    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hjblt"
Feb 13 19:50:15.753001 kubelet[3647]: E0213 19:50:15.752534    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hjblt"
Feb 13 19:50:15.753817 kubelet[3647]: E0213 19:50:15.752604    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hjblt_kube-system(8a64d6ad-b799-442a-9fba-40d222a33c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hjblt_kube-system(8a64d6ad-b799-442a-9fba-40d222a33c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hjblt" podUID="8a64d6ad-b799-442a-9fba-40d222a33c18"
Feb 13 19:50:15.760834 containerd[2037]: time="2025-02-13T19:50:15.759581162Z" level=error msg="Failed to destroy network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.766813 containerd[2037]: time="2025-02-13T19:50:15.766734026Z" level=error msg="encountered an error cleaning up failed sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.767091 containerd[2037]: time="2025-02-13T19:50:15.766846766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vbqfz,Uid:244383ed-be9f-4e55-9206-721fd35d9360,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.768303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6-shm.mount: Deactivated successfully.
Feb 13 19:50:15.768580 kubelet[3647]: E0213 19:50:15.768437    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.768994 kubelet[3647]: E0213 19:50:15.768812    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vbqfz"
Feb 13 19:50:15.768994 kubelet[3647]: E0213 19:50:15.768891    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vbqfz"
Feb 13 19:50:15.769169 kubelet[3647]: E0213 19:50:15.769056    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vbqfz_kube-system(244383ed-be9f-4e55-9206-721fd35d9360)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vbqfz_kube-system(244383ed-be9f-4e55-9206-721fd35d9360)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vbqfz" podUID="244383ed-be9f-4e55-9206-721fd35d9360"
Feb 13 19:50:15.769867 containerd[2037]: time="2025-02-13T19:50:15.769654598Z" level=error msg="Failed to destroy network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.778569 containerd[2037]: time="2025-02-13T19:50:15.778218782Z" level=error msg="encountered an error cleaning up failed sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.779262 containerd[2037]: time="2025-02-13T19:50:15.778908638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-5wp2p,Uid:2b60d0c4-0403-4702-b85f-ae7526b9b83b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.779445 containerd[2037]: time="2025-02-13T19:50:15.779243594Z" level=error msg="Failed to destroy network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.780756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af-shm.mount: Deactivated successfully.
Feb 13 19:50:15.783530 containerd[2037]: time="2025-02-13T19:50:15.781476746Z" level=error msg="encountered an error cleaning up failed sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.783530 containerd[2037]: time="2025-02-13T19:50:15.781560134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4c55485-sms9j,Uid:91cd868f-3147-489a-9154-5e881b7a25ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.785784 kubelet[3647]: E0213 19:50:15.784087    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.785784 kubelet[3647]: E0213 19:50:15.784169    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f4c55485-sms9j"
Feb 13 19:50:15.785784 kubelet[3647]: E0213 19:50:15.784208    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f4c55485-sms9j"
Feb 13 19:50:15.786134 kubelet[3647]: E0213 19:50:15.784283    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f4c55485-sms9j_calico-system(91cd868f-3147-489a-9154-5e881b7a25ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f4c55485-sms9j_calico-system(91cd868f-3147-489a-9154-5e881b7a25ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f4c55485-sms9j" podUID="91cd868f-3147-489a-9154-5e881b7a25ed"
Feb 13 19:50:15.790301 kubelet[3647]: E0213 19:50:15.789146    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.791203 kubelet[3647]: E0213 19:50:15.791125    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p"
Feb 13 19:50:15.791330 kubelet[3647]: E0213 19:50:15.791220    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p"
Feb 13 19:50:15.791400 kubelet[3647]: E0213 19:50:15.791324    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-899bfd54-5wp2p_calico-apiserver(2b60d0c4-0403-4702-b85f-ae7526b9b83b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-899bfd54-5wp2p_calico-apiserver(2b60d0c4-0403-4702-b85f-ae7526b9b83b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p" podUID="2b60d0c4-0403-4702-b85f-ae7526b9b83b"
Feb 13 19:50:15.794815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854-shm.mount: Deactivated successfully.
Feb 13 19:50:15.815670 containerd[2037]: time="2025-02-13T19:50:15.815587154Z" level=error msg="Failed to destroy network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.816764 containerd[2037]: time="2025-02-13T19:50:15.816693806Z" level=error msg="encountered an error cleaning up failed sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.816994 containerd[2037]: time="2025-02-13T19:50:15.816936182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-c6kfl,Uid:d94b6c05-9941-4d00-9fa6-b2a1c452394b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.817570 kubelet[3647]: E0213 19:50:15.817516    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:15.817692 kubelet[3647]: E0213 19:50:15.817598    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl"
Feb 13 19:50:15.817692 kubelet[3647]: E0213 19:50:15.817633    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl"
Feb 13 19:50:15.817812 kubelet[3647]: E0213 19:50:15.817707    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-899bfd54-c6kfl_calico-apiserver(d94b6c05-9941-4d00-9fa6-b2a1c452394b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-899bfd54-c6kfl_calico-apiserver(d94b6c05-9941-4d00-9fa6-b2a1c452394b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl" podUID="d94b6c05-9941-4d00-9fa6-b2a1c452394b"
Feb 13 19:50:16.180823 containerd[2037]: time="2025-02-13T19:50:16.180733608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6np6,Uid:a53a8aa8-6a9b-4643-89f4-26162e962c9a,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:16.279045 containerd[2037]: time="2025-02-13T19:50:16.278964288Z" level=error msg="Failed to destroy network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.280090 containerd[2037]: time="2025-02-13T19:50:16.279884436Z" level=error msg="encountered an error cleaning up failed sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.280204 containerd[2037]: time="2025-02-13T19:50:16.279988188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6np6,Uid:a53a8aa8-6a9b-4643-89f4-26162e962c9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.280446 kubelet[3647]: E0213 19:50:16.280375    3647 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.280569 kubelet[3647]: E0213 19:50:16.280475    3647 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:16.280569 kubelet[3647]: E0213 19:50:16.280510    3647 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6np6"
Feb 13 19:50:16.280690 kubelet[3647]: E0213 19:50:16.280586    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6np6_calico-system(a53a8aa8-6a9b-4643-89f4-26162e962c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6np6_calico-system(a53a8aa8-6a9b-4643-89f4-26162e962c9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:16.440333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f-shm.mount: Deactivated successfully.
Feb 13 19:50:16.473706 kubelet[3647]: I0213 19:50:16.472708    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:16.476200 containerd[2037]: time="2025-02-13T19:50:16.474951973Z" level=info msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\""
Feb 13 19:50:16.476200 containerd[2037]: time="2025-02-13T19:50:16.475306405Z" level=info msg="Ensure that sandbox d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af in task-service has been cleanup successfully"
Feb 13 19:50:16.483351 kubelet[3647]: I0213 19:50:16.483262    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:16.488829 containerd[2037]: time="2025-02-13T19:50:16.486994945Z" level=info msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\""
Feb 13 19:50:16.491290 containerd[2037]: time="2025-02-13T19:50:16.489918925Z" level=info msg="Ensure that sandbox fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854 in task-service has been cleanup successfully"
Feb 13 19:50:16.494428 kubelet[3647]: I0213 19:50:16.492752    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:16.495989 kubelet[3647]: I0213 19:50:16.495938    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:16.498163 containerd[2037]: time="2025-02-13T19:50:16.498117865Z" level=info msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\""
Feb 13 19:50:16.499915 containerd[2037]: time="2025-02-13T19:50:16.499845685Z" level=info msg="Ensure that sandbox 7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f in task-service has been cleanup successfully"
Feb 13 19:50:16.517745 kubelet[3647]: I0213 19:50:16.516753    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:16.519100 containerd[2037]: time="2025-02-13T19:50:16.518271457Z" level=info msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\""
Feb 13 19:50:16.519100 containerd[2037]: time="2025-02-13T19:50:16.518623597Z" level=info msg="Ensure that sandbox 5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6 in task-service has been cleanup successfully"
Feb 13 19:50:16.604147 kubelet[3647]: I0213 19:50:16.604093    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:16.609549 containerd[2037]: time="2025-02-13T19:50:16.609488606Z" level=info msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\""
Feb 13 19:50:16.609840 containerd[2037]: time="2025-02-13T19:50:16.609791210Z" level=info msg="Ensure that sandbox 13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90 in task-service has been cleanup successfully"
Feb 13 19:50:16.619432 kubelet[3647]: I0213 19:50:16.619394    3647 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:16.622103 containerd[2037]: time="2025-02-13T19:50:16.621158402Z" level=info msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\""
Feb 13 19:50:16.622103 containerd[2037]: time="2025-02-13T19:50:16.621448418Z" level=info msg="Ensure that sandbox a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d in task-service has been cleanup successfully"
Feb 13 19:50:16.712057 containerd[2037]: time="2025-02-13T19:50:16.711331382Z" level=error msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" failed" error="failed to destroy network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.713726 kubelet[3647]: E0213 19:50:16.713671    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:16.714400 kubelet[3647]: E0213 19:50:16.714181    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"}
Feb 13 19:50:16.714628 kubelet[3647]: E0213 19:50:16.714598    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91cd868f-3147-489a-9154-5e881b7a25ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.714929 kubelet[3647]: E0213 19:50:16.714888    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91cd868f-3147-489a-9154-5e881b7a25ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f4c55485-sms9j" podUID="91cd868f-3147-489a-9154-5e881b7a25ed"
Feb 13 19:50:16.719169 containerd[2037]: time="2025-02-13T19:50:16.718661150Z" level=error msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" failed" error="failed to destroy network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.719751 kubelet[3647]: E0213 19:50:16.719520    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:16.719751 kubelet[3647]: E0213 19:50:16.719589    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"}
Feb 13 19:50:16.719751 kubelet[3647]: E0213 19:50:16.719646    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"244383ed-be9f-4e55-9206-721fd35d9360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.719751 kubelet[3647]: E0213 19:50:16.719691    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"244383ed-be9f-4e55-9206-721fd35d9360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vbqfz" podUID="244383ed-be9f-4e55-9206-721fd35d9360"
Feb 13 19:50:16.735380 containerd[2037]: time="2025-02-13T19:50:16.735217971Z" level=error msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" failed" error="failed to destroy network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.736002 kubelet[3647]: E0213 19:50:16.735780    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:16.736002 kubelet[3647]: E0213 19:50:16.735849    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"}
Feb 13 19:50:16.736002 kubelet[3647]: E0213 19:50:16.735912    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d94b6c05-9941-4d00-9fa6-b2a1c452394b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.736002 kubelet[3647]: E0213 19:50:16.735953    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d94b6c05-9941-4d00-9fa6-b2a1c452394b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl" podUID="d94b6c05-9941-4d00-9fa6-b2a1c452394b"
Feb 13 19:50:16.744707 containerd[2037]: time="2025-02-13T19:50:16.744246795Z" level=error msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" failed" error="failed to destroy network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.745141 kubelet[3647]: E0213 19:50:16.744903    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:16.745141 kubelet[3647]: E0213 19:50:16.745071    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"}
Feb 13 19:50:16.745598 kubelet[3647]: E0213 19:50:16.745492    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b60d0c4-0403-4702-b85f-ae7526b9b83b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.745598 kubelet[3647]: E0213 19:50:16.745545    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b60d0c4-0403-4702-b85f-ae7526b9b83b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p" podUID="2b60d0c4-0403-4702-b85f-ae7526b9b83b"
Feb 13 19:50:16.763475 containerd[2037]: time="2025-02-13T19:50:16.763209675Z" level=error msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" failed" error="failed to destroy network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.764284 kubelet[3647]: E0213 19:50:16.764221    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:16.764284 kubelet[3647]: E0213 19:50:16.764298    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"}
Feb 13 19:50:16.765224 kubelet[3647]: E0213 19:50:16.764359    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.765224 kubelet[3647]: E0213 19:50:16.764404    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a53a8aa8-6a9b-4643-89f4-26162e962c9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6np6" podUID="a53a8aa8-6a9b-4643-89f4-26162e962c9a"
Feb 13 19:50:16.772539 containerd[2037]: time="2025-02-13T19:50:16.772479483Z" level=error msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" failed" error="failed to destroy network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:16.773244 kubelet[3647]: E0213 19:50:16.773075    3647 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:16.773244 kubelet[3647]: E0213 19:50:16.773167    3647 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"}
Feb 13 19:50:16.773786 kubelet[3647]: E0213 19:50:16.773600    3647 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a64d6ad-b799-442a-9fba-40d222a33c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 19:50:16.773786 kubelet[3647]: E0213 19:50:16.773689    3647 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a64d6ad-b799-442a-9fba-40d222a33c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hjblt" podUID="8a64d6ad-b799-442a-9fba-40d222a33c18"
Feb 13 19:50:21.945604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334785316.mount: Deactivated successfully.
Feb 13 19:50:22.022768 containerd[2037]: time="2025-02-13T19:50:22.022375193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:22.023928 containerd[2037]: time="2025-02-13T19:50:22.023870285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 19:50:22.025359 containerd[2037]: time="2025-02-13T19:50:22.024892649Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:22.029592 containerd[2037]: time="2025-02-13T19:50:22.029482349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:22.031066 containerd[2037]: time="2025-02-13T19:50:22.030866369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.544661013s"
Feb 13 19:50:22.031066 containerd[2037]: time="2025-02-13T19:50:22.030930089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 19:50:22.071488 containerd[2037]: time="2025-02-13T19:50:22.071409533Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:50:22.108421 containerd[2037]: time="2025-02-13T19:50:22.108358121Z" level=info msg="CreateContainer within sandbox \"9050a38fe183d54b641db5b2bc78631890d876169e2b08e231beb7dec28625a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9e8f1263997bfa83f446ec00cb804dda595bc0a7bb4a4c798c27bd44629c8c00\""
Feb 13 19:50:22.109874 containerd[2037]: time="2025-02-13T19:50:22.109651649Z" level=info msg="StartContainer for \"9e8f1263997bfa83f446ec00cb804dda595bc0a7bb4a4c798c27bd44629c8c00\""
Feb 13 19:50:22.215764 containerd[2037]: time="2025-02-13T19:50:22.215488650Z" level=info msg="StartContainer for \"9e8f1263997bfa83f446ec00cb804dda595bc0a7bb4a4c798c27bd44629c8c00\" returns successfully"
Feb 13 19:50:22.335166 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:50:22.335359 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 19:50:23.658584 kubelet[3647]: I0213 19:50:23.658523    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:24.377080 kernel: bpftool[4897]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 19:50:24.683821 systemd-networkd[1600]: vxlan.calico: Link UP
Feb 13 19:50:24.683841 systemd-networkd[1600]: vxlan.calico: Gained carrier
Feb 13 19:50:24.689485 (udev-worker)[4751]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:24.729701 (udev-worker)[4750]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:26.247407 systemd-networkd[1600]: vxlan.calico: Gained IPv6LL
Feb 13 19:50:27.177773 containerd[2037]: time="2025-02-13T19:50:27.176303242Z" level=info msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\""
Feb 13 19:50:27.313402 kubelet[3647]: I0213 19:50:27.312512    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2n657" podStartSLOduration=6.651668299 podStartE2EDuration="24.312490067s" podCreationTimestamp="2025-02-13 19:50:03 +0000 UTC" firstStartedPulling="2025-02-13 19:50:04.372142981 +0000 UTC m=+23.454350973" lastFinishedPulling="2025-02-13 19:50:22.032964797 +0000 UTC m=+41.115172741" observedRunningTime="2025-02-13 19:50:22.69717896 +0000 UTC m=+41.779386916" watchObservedRunningTime="2025-02-13 19:50:27.312490067 +0000 UTC m=+46.394698035"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.313 [INFO][4983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.315 [INFO][4983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" iface="eth0" netns="/var/run/netns/cni-1d49e497-5e3a-3edd-6891-d3079e3ebe26"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.316 [INFO][4983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" iface="eth0" netns="/var/run/netns/cni-1d49e497-5e3a-3edd-6891-d3079e3ebe26"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.319 [INFO][4983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" iface="eth0" netns="/var/run/netns/cni-1d49e497-5e3a-3edd-6891-d3079e3ebe26"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.319 [INFO][4983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.319 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.373 [INFO][4990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.375 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.375 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.391 [WARNING][4990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.391 [INFO][4990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.395 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:27.403588 containerd[2037]: 2025-02-13 19:50:27.400 [INFO][4983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:27.406134 containerd[2037]: time="2025-02-13T19:50:27.404830620Z" level=info msg="TearDown network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" successfully"
Feb 13 19:50:27.406305 containerd[2037]: time="2025-02-13T19:50:27.406143396Z" level=info msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" returns successfully"
Feb 13 19:50:27.413979 containerd[2037]: time="2025-02-13T19:50:27.413685708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vbqfz,Uid:244383ed-be9f-4e55-9206-721fd35d9360,Namespace:kube-system,Attempt:1,}"
Feb 13 19:50:27.416699 systemd[1]: run-netns-cni\x2d1d49e497\x2d5e3a\x2d3edd\x2d6891\x2dd3079e3ebe26.mount: Deactivated successfully.
Feb 13 19:50:27.703321 systemd-networkd[1600]: cali97ff637fe97: Link UP
Feb 13 19:50:27.705074 systemd-networkd[1600]: cali97ff637fe97: Gained carrier
Feb 13 19:50:27.710602 (udev-worker)[4931]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.563 [INFO][4997] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0 coredns-7db6d8ff4d- kube-system  244383ed-be9f-4e55-9206-721fd35d9360 757 0 2025-02-13 19:49:54 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ip-172-31-22-232  coredns-7db6d8ff4d-vbqfz eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali97ff637fe97  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.564 [INFO][4997] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.628 [INFO][5009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" HandleID="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.646 [INFO][5009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" HandleID="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cbe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-232", "pod":"coredns-7db6d8ff4d-vbqfz", "timestamp":"2025-02-13 19:50:27.628558813 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.646 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.646 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.646 [INFO][5009] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.649 [INFO][5009] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.655 [INFO][5009] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.663 [INFO][5009] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.666 [INFO][5009] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.670 [INFO][5009] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.670 [INFO][5009] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.674 [INFO][5009] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.681 [INFO][5009] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.691 [INFO][5009] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.65/26] block=192.168.42.64/26 handle="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.691 [INFO][5009] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.65/26] handle="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" host="ip-172-31-22-232"
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.691 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:27.740535 containerd[2037]: 2025-02-13 19:50:27.691 [INFO][5009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.65/26] IPv6=[] ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" HandleID="k8s-pod-network.4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.697 [INFO][4997] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"244383ed-be9f-4e55-9206-721fd35d9360", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"coredns-7db6d8ff4d-vbqfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97ff637fe97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.697 [INFO][4997] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.65/32] ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.697 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97ff637fe97 ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.706 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.707 [INFO][4997] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"244383ed-be9f-4e55-9206-721fd35d9360", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125", Pod:"coredns-7db6d8ff4d-vbqfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97ff637fe97", MAC:"72:43:73:01:c3:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:27.742453 containerd[2037]: 2025-02-13 19:50:27.732 [INFO][4997] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vbqfz" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:27.779099 containerd[2037]: time="2025-02-13T19:50:27.778481845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:27.779099 containerd[2037]: time="2025-02-13T19:50:27.778643917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:27.779099 containerd[2037]: time="2025-02-13T19:50:27.778699093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:27.779099 containerd[2037]: time="2025-02-13T19:50:27.778964965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:27.835985 systemd[1]: run-containerd-runc-k8s.io-4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125-runc.XiZqX5.mount: Deactivated successfully.
Feb 13 19:50:27.889395 containerd[2037]: time="2025-02-13T19:50:27.889304630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vbqfz,Uid:244383ed-be9f-4e55-9206-721fd35d9360,Namespace:kube-system,Attempt:1,} returns sandbox id \"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125\""
Feb 13 19:50:27.897754 containerd[2037]: time="2025-02-13T19:50:27.897539498Z" level=info msg="CreateContainer within sandbox \"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:50:27.923416 containerd[2037]: time="2025-02-13T19:50:27.923240666Z" level=info msg="CreateContainer within sandbox \"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6d6369a0cafb5431c641645765b72a8388ab10295ee55aad8eb0a0316ed543e\""
Feb 13 19:50:27.925762 containerd[2037]: time="2025-02-13T19:50:27.924010334Z" level=info msg="StartContainer for \"a6d6369a0cafb5431c641645765b72a8388ab10295ee55aad8eb0a0316ed543e\""
Feb 13 19:50:28.016551 containerd[2037]: time="2025-02-13T19:50:28.016226111Z" level=info msg="StartContainer for \"a6d6369a0cafb5431c641645765b72a8388ab10295ee55aad8eb0a0316ed543e\" returns successfully"
Feb 13 19:50:28.175058 containerd[2037]: time="2025-02-13T19:50:28.174987407Z" level=info msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\""
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.263 [INFO][5117] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.264 [INFO][5117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" iface="eth0" netns="/var/run/netns/cni-01c5f66c-a845-0112-8ce3-0b42edb8e75a"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.264 [INFO][5117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" iface="eth0" netns="/var/run/netns/cni-01c5f66c-a845-0112-8ce3-0b42edb8e75a"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.264 [INFO][5117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" iface="eth0" netns="/var/run/netns/cni-01c5f66c-a845-0112-8ce3-0b42edb8e75a"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.265 [INFO][5117] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.265 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.345 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.346 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.346 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.357 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.357 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.360 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:28.366930 containerd[2037]: 2025-02-13 19:50:28.363 [INFO][5117] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:28.368219 containerd[2037]: time="2025-02-13T19:50:28.367762524Z" level=info msg="TearDown network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" successfully"
Feb 13 19:50:28.368219 containerd[2037]: time="2025-02-13T19:50:28.367842636Z" level=info msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" returns successfully"
Feb 13 19:50:28.369921 containerd[2037]: time="2025-02-13T19:50:28.369806112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6np6,Uid:a53a8aa8-6a9b-4643-89f4-26162e962c9a,Namespace:calico-system,Attempt:1,}"
Feb 13 19:50:28.428510 systemd[1]: run-netns-cni\x2d01c5f66c\x2da845\x2d0112\x2d8ce3\x2d0b42edb8e75a.mount: Deactivated successfully.
Feb 13 19:50:28.694836 systemd[1]: Started sshd@7-172.31.22.232:22-139.178.89.65:59880.service - OpenSSH per-connection server daemon (139.178.89.65:59880).
Feb 13 19:50:28.736361 kubelet[3647]: I0213 19:50:28.726498    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vbqfz" podStartSLOduration=34.726447146 podStartE2EDuration="34.726447146s" podCreationTimestamp="2025-02-13 19:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:28.724102742 +0000 UTC m=+47.806310686" watchObservedRunningTime="2025-02-13 19:50:28.726447146 +0000 UTC m=+47.808655114"
Feb 13 19:50:28.803463 systemd-networkd[1600]: cali804f8faff4e: Link UP
Feb 13 19:50:28.807166 systemd-networkd[1600]: cali804f8faff4e: Gained carrier
Feb 13 19:50:28.808996 systemd-networkd[1600]: cali97ff637fe97: Gained IPv6LL
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.553 [INFO][5131] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0 csi-node-driver- calico-system  a53a8aa8-6a9b-4643-89f4-26162e962c9a 768 0 2025-02-13 19:50:03 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  ip-172-31-22-232  csi-node-driver-k6np6 eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali804f8faff4e  [] []}} ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.553 [INFO][5131] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.649 [INFO][5145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" HandleID="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.666 [INFO][5145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" HandleID="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000fbbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"csi-node-driver-k6np6", "timestamp":"2025-02-13 19:50:28.64963307 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.666 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.666 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.666 [INFO][5145] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.669 [INFO][5145] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.677 [INFO][5145] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.701 [INFO][5145] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.704 [INFO][5145] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.710 [INFO][5145] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.710 [INFO][5145] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.717 [INFO][5145] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.743 [INFO][5145] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.764 [INFO][5145] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.66/26] block=192.168.42.64/26 handle="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.764 [INFO][5145] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.66/26] handle="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" host="ip-172-31-22-232"
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.764 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:28.846139 containerd[2037]: 2025-02-13 19:50:28.764 [INFO][5145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.66/26] IPv6=[] ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" HandleID="k8s-pod-network.ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.771 [INFO][5131] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a53a8aa8-6a9b-4643-89f4-26162e962c9a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"csi-node-driver-k6np6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804f8faff4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.771 [INFO][5131] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.66/32] ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.772 [INFO][5131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali804f8faff4e ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.802 [INFO][5131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.810 [INFO][5131] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a53a8aa8-6a9b-4643-89f4-26162e962c9a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c", Pod:"csi-node-driver-k6np6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804f8faff4e", MAC:"76:91:71:ac:37:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:28.848771 containerd[2037]: 2025-02-13 19:50:28.841 [INFO][5131] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c" Namespace="calico-system" Pod="csi-node-driver-k6np6" WorkloadEndpoint="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:28.901793 containerd[2037]: time="2025-02-13T19:50:28.901503567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:28.902341 containerd[2037]: time="2025-02-13T19:50:28.901686195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:28.902658 containerd[2037]: time="2025-02-13T19:50:28.902494647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:28.903479 containerd[2037]: time="2025-02-13T19:50:28.903286671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:28.959764 sshd[5152]: Accepted publickey for core from 139.178.89.65 port 59880 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:28.966126 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:28.983548 systemd-logind[2018]: New session 8 of user core.
Feb 13 19:50:28.990647 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:50:29.039535 containerd[2037]: time="2025-02-13T19:50:29.036473268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6np6,Uid:a53a8aa8-6a9b-4643-89f4-26162e962c9a,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c\""
Feb 13 19:50:29.072499 containerd[2037]: time="2025-02-13T19:50:29.072412980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 19:50:29.182259 containerd[2037]: time="2025-02-13T19:50:29.180906720Z" level=info msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\""
Feb 13 19:50:29.185731 containerd[2037]: time="2025-02-13T19:50:29.185668884Z" level=info msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\""
Feb 13 19:50:29.186745 containerd[2037]: time="2025-02-13T19:50:29.183162588Z" level=info msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\""
Feb 13 19:50:29.439248 sshd[5152]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:29.459467 systemd[1]: sshd@7-172.31.22.232:22-139.178.89.65:59880.service: Deactivated successfully.
Feb 13 19:50:29.477525 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:50:29.481641 systemd-logind[2018]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:50:29.492150 systemd-logind[2018]: Removed session 8.
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.441 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.450 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" iface="eth0" netns="/var/run/netns/cni-a8ac61f0-040e-e95d-6d17-39eb9e336859"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.456 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" iface="eth0" netns="/var/run/netns/cni-a8ac61f0-040e-e95d-6d17-39eb9e336859"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.457 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" iface="eth0" netns="/var/run/netns/cni-a8ac61f0-040e-e95d-6d17-39eb9e336859"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.457 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.470 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.572 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.573 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.573 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.591 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.591 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.596 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:29.606455 containerd[2037]: 2025-02-13 19:50:29.602 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:29.616809 containerd[2037]: time="2025-02-13T19:50:29.614434718Z" level=info msg="TearDown network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" successfully"
Feb 13 19:50:29.616809 containerd[2037]: time="2025-02-13T19:50:29.614482154Z" level=info msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" returns successfully"
Feb 13 19:50:29.619496 systemd[1]: run-netns-cni\x2da8ac61f0\x2d040e\x2de95d\x2d6d17\x2d39eb9e336859.mount: Deactivated successfully.
Feb 13 19:50:29.633709 containerd[2037]: time="2025-02-13T19:50:29.633301407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-5wp2p,Uid:2b60d0c4-0403-4702-b85f-ae7526b9b83b,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.521 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.521 [INFO][5262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" iface="eth0" netns="/var/run/netns/cni-2cd4b98b-c8dc-05ae-2848-013805ce36d4"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" iface="eth0" netns="/var/run/netns/cni-2cd4b98b-c8dc-05ae-2848-013805ce36d4"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" iface="eth0" netns="/var/run/netns/cni-2cd4b98b-c8dc-05ae-2848-013805ce36d4"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.599 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.600 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.601 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.628 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.628 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.635 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:29.645284 containerd[2037]: 2025-02-13 19:50:29.641 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:29.646853 containerd[2037]: time="2025-02-13T19:50:29.646637691Z" level=info msg="TearDown network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" successfully"
Feb 13 19:50:29.646853 containerd[2037]: time="2025-02-13T19:50:29.646681779Z" level=info msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" returns successfully"
Feb 13 19:50:29.652626 containerd[2037]: time="2025-02-13T19:50:29.652148403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjblt,Uid:8a64d6ad-b799-442a-9fba-40d222a33c18,Namespace:kube-system,Attempt:1,}"
Feb 13 19:50:29.662732 systemd[1]: run-netns-cni\x2d2cd4b98b\x2dc8dc\x2d05ae\x2d2848\x2d013805ce36d4.mount: Deactivated successfully.
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.502 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.519 [INFO][5260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" iface="eth0" netns="/var/run/netns/cni-abe4d1b4-61b4-121e-240c-b7623c6d1d3d"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.520 [INFO][5260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" iface="eth0" netns="/var/run/netns/cni-abe4d1b4-61b4-121e-240c-b7623c6d1d3d"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" iface="eth0" netns="/var/run/netns/cni-abe4d1b4-61b4-121e-240c-b7623c6d1d3d"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.522 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.639 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.639 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.639 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.658 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.658 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.663 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:29.673423 containerd[2037]: 2025-02-13 19:50:29.668 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:29.677815 containerd[2037]: time="2025-02-13T19:50:29.677749743Z" level=info msg="TearDown network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" successfully"
Feb 13 19:50:29.677815 containerd[2037]: time="2025-02-13T19:50:29.677804811Z" level=info msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" returns successfully"
Feb 13 19:50:29.681303 containerd[2037]: time="2025-02-13T19:50:29.680692923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4c55485-sms9j,Uid:91cd868f-3147-489a-9154-5e881b7a25ed,Namespace:calico-system,Attempt:1,}"
Feb 13 19:50:29.681571 systemd[1]: run-netns-cni\x2dabe4d1b4\x2d61b4\x2d121e\x2d240c\x2db7623c6d1d3d.mount: Deactivated successfully.
Feb 13 19:50:30.002096 systemd-networkd[1600]: cali46c1bdbb3cd: Link UP
Feb 13 19:50:30.009387 systemd-networkd[1600]: cali46c1bdbb3cd: Gained carrier
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.793 [INFO][5305] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0 calico-apiserver-899bfd54- calico-apiserver  2b60d0c4-0403-4702-b85f-ae7526b9b83b 817 0 2025-02-13 19:50:04 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:899bfd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ip-172-31-22-232  calico-apiserver-899bfd54-5wp2p eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46c1bdbb3cd  [] []}} ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.793 [INFO][5305] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.883 [INFO][5336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" HandleID="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.912 [INFO][5336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" HandleID="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-232", "pod":"calico-apiserver-899bfd54-5wp2p", "timestamp":"2025-02-13 19:50:29.883543996 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.913 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.913 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.913 [INFO][5336] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.918 [INFO][5336] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.930 [INFO][5336] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.940 [INFO][5336] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.944 [INFO][5336] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.949 [INFO][5336] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.949 [INFO][5336] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.953 [INFO][5336] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.966 [INFO][5336] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.982 [INFO][5336] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.67/26] block=192.168.42.64/26 handle="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.983 [INFO][5336] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.67/26] handle="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" host="ip-172-31-22-232"
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.983 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:30.083389 containerd[2037]: 2025-02-13 19:50:29.983 [INFO][5336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.67/26] IPv6=[] ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" HandleID="k8s-pod-network.ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:29.989 [INFO][5305] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b60d0c4-0403-4702-b85f-ae7526b9b83b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-apiserver-899bfd54-5wp2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c1bdbb3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:29.989 [INFO][5305] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.67/32] ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:29.989 [INFO][5305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c1bdbb3cd ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:30.016 [INFO][5305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:30.027 [INFO][5305] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b60d0c4-0403-4702-b85f-ae7526b9b83b", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9", Pod:"calico-apiserver-899bfd54-5wp2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c1bdbb3cd", MAC:"fe:ca:d5:ec:f4:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.084883 containerd[2037]: 2025-02-13 19:50:30.061 [INFO][5305] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-5wp2p" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:30.194183 containerd[2037]: time="2025-02-13T19:50:30.192766405Z" level=info msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\""
Feb 13 19:50:30.200877 systemd-networkd[1600]: calia1b3310afe0: Link UP
Feb 13 19:50:30.204649 systemd-networkd[1600]: calia1b3310afe0: Gained carrier
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:29.907 [INFO][5316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0 calico-kube-controllers-78f4c55485- calico-system  91cd868f-3147-489a-9154-5e881b7a25ed 819 0 2025-02-13 19:50:04 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f4c55485 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ip-172-31-22-232  calico-kube-controllers-78f4c55485-sms9j eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1b3310afe0  [] []}} ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:29.907 [INFO][5316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.010 [INFO][5347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" HandleID="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.065 [INFO][5347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" HandleID="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001575a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-232", "pod":"calico-kube-controllers-78f4c55485-sms9j", "timestamp":"2025-02-13 19:50:30.010186044 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.065 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.065 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.065 [INFO][5347] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.082 [INFO][5347] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.108 [INFO][5347] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.121 [INFO][5347] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.125 [INFO][5347] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.131 [INFO][5347] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.131 [INFO][5347] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.134 [INFO][5347] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.144 [INFO][5347] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.161 [INFO][5347] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.68/26] block=192.168.42.64/26 handle="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.162 [INFO][5347] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.68/26] handle="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" host="ip-172-31-22-232"
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.163 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:30.279593 containerd[2037]: 2025-02-13 19:50:30.165 [INFO][5347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.68/26] IPv6=[] ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" HandleID="k8s-pod-network.2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.178 [INFO][5316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0", GenerateName:"calico-kube-controllers-78f4c55485-", Namespace:"calico-system", SelfLink:"", UID:"91cd868f-3147-489a-9154-5e881b7a25ed", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4c55485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-kube-controllers-78f4c55485-sms9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1b3310afe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.179 [INFO][5316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.68/32] ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.179 [INFO][5316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1b3310afe0 ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.205 [INFO][5316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.212 [INFO][5316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0", GenerateName:"calico-kube-controllers-78f4c55485-", Namespace:"calico-system", SelfLink:"", UID:"91cd868f-3147-489a-9154-5e881b7a25ed", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4c55485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9", Pod:"calico-kube-controllers-78f4c55485-sms9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1b3310afe0", MAC:"8e:58:17:bc:c5:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.280713 containerd[2037]: 2025-02-13 19:50:30.258 [INFO][5316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9" Namespace="calico-system" Pod="calico-kube-controllers-78f4c55485-sms9j" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:30.291679 containerd[2037]: time="2025-02-13T19:50:30.266760458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:30.291679 containerd[2037]: time="2025-02-13T19:50:30.266934482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:30.291679 containerd[2037]: time="2025-02-13T19:50:30.268710470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.291679 containerd[2037]: time="2025-02-13T19:50:30.273584822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.400867 systemd-networkd[1600]: calibe9a11802b3: Link UP
Feb 13 19:50:30.405074 systemd-networkd[1600]: calibe9a11802b3: Gained carrier
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:29.924 [INFO][5315] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0 coredns-7db6d8ff4d- kube-system  8a64d6ad-b799-442a-9fba-40d222a33c18 821 0 2025-02-13 19:49:54 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ip-172-31-22-232  coredns-7db6d8ff4d-hjblt eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calibe9a11802b3  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:29.924 [INFO][5315] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.053 [INFO][5351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" HandleID="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.098 [INFO][5351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" HandleID="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-232", "pod":"coredns-7db6d8ff4d-hjblt", "timestamp":"2025-02-13 19:50:30.053013073 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.099 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.163 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.164 [INFO][5351] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.169 [INFO][5351] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.190 [INFO][5351] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.225 [INFO][5351] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.229 [INFO][5351] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.247 [INFO][5351] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.247 [INFO][5351] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.257 [INFO][5351] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.281 [INFO][5351] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.323 [INFO][5351] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.69/26] block=192.168.42.64/26 handle="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.323 [INFO][5351] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.69/26] handle="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" host="ip-172-31-22-232"
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.323 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:30.466657 containerd[2037]: 2025-02-13 19:50:30.325 [INFO][5351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.69/26] IPv6=[] ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" HandleID="k8s-pod-network.ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.370 [INFO][5315] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a64d6ad-b799-442a-9fba-40d222a33c18", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"coredns-7db6d8ff4d-hjblt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe9a11802b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.377 [INFO][5315] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.69/32] ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.377 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe9a11802b3 ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.410 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.417 [INFO][5315] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a64d6ad-b799-442a-9fba-40d222a33c18", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e", Pod:"coredns-7db6d8ff4d-hjblt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe9a11802b3", MAC:"8e:e1:8e:7f:18:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:30.469730 containerd[2037]: 2025-02-13 19:50:30.459 [INFO][5315] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjblt" WorkloadEndpoint="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:30.512829 containerd[2037]: time="2025-02-13T19:50:30.511877763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:30.512829 containerd[2037]: time="2025-02-13T19:50:30.511987587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:30.513708 containerd[2037]: time="2025-02-13T19:50:30.513128175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.513708 containerd[2037]: time="2025-02-13T19:50:30.513396867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.569688 containerd[2037]: time="2025-02-13T19:50:30.568733559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:30.569688 containerd[2037]: time="2025-02-13T19:50:30.568861695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:30.569688 containerd[2037]: time="2025-02-13T19:50:30.568899951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.572270 containerd[2037]: time="2025-02-13T19:50:30.571272591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:30.751542 containerd[2037]: time="2025-02-13T19:50:30.750388948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-5wp2p,Uid:2b60d0c4-0403-4702-b85f-ae7526b9b83b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9\""
Feb 13 19:50:30.792985 systemd-networkd[1600]: cali804f8faff4e: Gained IPv6LL
Feb 13 19:50:30.862201 containerd[2037]: time="2025-02-13T19:50:30.861833165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjblt,Uid:8a64d6ad-b799-442a-9fba-40d222a33c18,Namespace:kube-system,Attempt:1,} returns sandbox id \"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e\""
Feb 13 19:50:30.885728 containerd[2037]: time="2025-02-13T19:50:30.885645773Z" level=info msg="CreateContainer within sandbox \"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:50:30.887070 containerd[2037]: time="2025-02-13T19:50:30.886993769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4c55485-sms9j,Uid:91cd868f-3147-489a-9154-5e881b7a25ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9\""
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.682 [INFO][5419] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.683 [INFO][5419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" iface="eth0" netns="/var/run/netns/cni-9b956835-98ce-32c0-3462-303d61e4b85b"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.684 [INFO][5419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" iface="eth0" netns="/var/run/netns/cni-9b956835-98ce-32c0-3462-303d61e4b85b"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.686 [INFO][5419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" iface="eth0" netns="/var/run/netns/cni-9b956835-98ce-32c0-3462-303d61e4b85b"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.687 [INFO][5419] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.687 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.846 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.852 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.852 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.886 [WARNING][5522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.887 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.891 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:30.902223 containerd[2037]: 2025-02-13 19:50:30.897 [INFO][5419] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:30.907099 containerd[2037]: time="2025-02-13T19:50:30.904758845Z" level=info msg="TearDown network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" successfully"
Feb 13 19:50:30.907099 containerd[2037]: time="2025-02-13T19:50:30.904807565Z" level=info msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" returns successfully"
Feb 13 19:50:30.910230 systemd[1]: run-netns-cni\x2d9b956835\x2d98ce\x2d32c0\x2d3462\x2d303d61e4b85b.mount: Deactivated successfully.
Feb 13 19:50:30.912647 containerd[2037]: time="2025-02-13T19:50:30.911284817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-c6kfl,Uid:d94b6c05-9941-4d00-9fa6-b2a1c452394b,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:50:30.940213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513280580.mount: Deactivated successfully.
Feb 13 19:50:30.950084 containerd[2037]: time="2025-02-13T19:50:30.949708781Z" level=info msg="CreateContainer within sandbox \"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0e702e6edff7ce843271faba4616a9c7fa6fa48c99267e0efe8de7a923d8d5a\""
Feb 13 19:50:30.954224 containerd[2037]: time="2025-02-13T19:50:30.953361605Z" level=info msg="StartContainer for \"a0e702e6edff7ce843271faba4616a9c7fa6fa48c99267e0efe8de7a923d8d5a\""
Feb 13 19:50:31.158008 containerd[2037]: time="2025-02-13T19:50:31.157822574Z" level=info msg="StartContainer for \"a0e702e6edff7ce843271faba4616a9c7fa6fa48c99267e0efe8de7a923d8d5a\" returns successfully"
Feb 13 19:50:31.295240 containerd[2037]: time="2025-02-13T19:50:31.294064503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.296484 containerd[2037]: time="2025-02-13T19:50:31.295436163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Feb 13 19:50:31.301211 containerd[2037]: time="2025-02-13T19:50:31.300985731Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.310235 containerd[2037]: time="2025-02-13T19:50:31.310148583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:31.312381 containerd[2037]: time="2025-02-13T19:50:31.311623611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.239139315s"
Feb 13 19:50:31.312381 containerd[2037]: time="2025-02-13T19:50:31.311733183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Feb 13 19:50:31.317060 containerd[2037]: time="2025-02-13T19:50:31.315761631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:50:31.318298 containerd[2037]: time="2025-02-13T19:50:31.318120495Z" level=info msg="CreateContainer within sandbox \"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 19:50:31.355913 containerd[2037]: time="2025-02-13T19:50:31.355605255Z" level=info msg="CreateContainer within sandbox \"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5adb796650391562cdbf9bb38ef199e55128d1c0f5ea75f85d73b218162fa811\""
Feb 13 19:50:31.357741 containerd[2037]: time="2025-02-13T19:50:31.357674151Z" level=info msg="StartContainer for \"5adb796650391562cdbf9bb38ef199e55128d1c0f5ea75f85d73b218162fa811\""
Feb 13 19:50:31.408951 systemd-networkd[1600]: cali196cf0e0ac1: Link UP
Feb 13 19:50:31.415283 systemd-networkd[1600]: cali196cf0e0ac1: Gained carrier
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.094 [INFO][5570] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0 calico-apiserver-899bfd54- calico-apiserver  d94b6c05-9941-4d00-9fa6-b2a1c452394b 839 0 2025-02-13 19:50:04 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:899bfd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ip-172-31-22-232  calico-apiserver-899bfd54-c6kfl eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali196cf0e0ac1  [] []}} ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.094 [INFO][5570] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.275 [INFO][5599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" HandleID="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.298 [INFO][5599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" HandleID="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ae790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-232", "pod":"calico-apiserver-899bfd54-c6kfl", "timestamp":"2025-02-13 19:50:31.275344623 +0000 UTC"}, Hostname:"ip-172-31-22-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.298 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.298 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.298 [INFO][5599] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-232'
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.303 [INFO][5599] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.313 [INFO][5599] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.328 [INFO][5599] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.332 [INFO][5599] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.340 [INFO][5599] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.340 [INFO][5599] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.345 [INFO][5599] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.355 [INFO][5599] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.375 [INFO][5599] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.70/26] block=192.168.42.64/26 handle="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.375 [INFO][5599] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.70/26] handle="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" host="ip-172-31-22-232"
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.375 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:31.460711 containerd[2037]: 2025-02-13 19:50:31.375 [INFO][5599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.70/26] IPv6=[] ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" HandleID="k8s-pod-network.9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.389 [INFO][5570] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94b6c05-9941-4d00-9fa6-b2a1c452394b", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"", Pod:"calico-apiserver-899bfd54-c6kfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali196cf0e0ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.390 [INFO][5570] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.70/32] ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.391 [INFO][5570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali196cf0e0ac1 ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.419 [INFO][5570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.422 [INFO][5570] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94b6c05-9941-4d00-9fa6-b2a1c452394b", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52", Pod:"calico-apiserver-899bfd54-c6kfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali196cf0e0ac1", MAC:"ae:1b:46:f8:c2:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:31.467426 containerd[2037]: 2025-02-13 19:50:31.446 [INFO][5570] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52" Namespace="calico-apiserver" Pod="calico-apiserver-899bfd54-c6kfl" WorkloadEndpoint="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:31.497486 systemd-networkd[1600]: calibe9a11802b3: Gained IPv6LL
Feb 13 19:50:31.544843 containerd[2037]: time="2025-02-13T19:50:31.544046656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:31.544843 containerd[2037]: time="2025-02-13T19:50:31.544145512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:31.544843 containerd[2037]: time="2025-02-13T19:50:31.544182304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:31.544843 containerd[2037]: time="2025-02-13T19:50:31.544348636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:31.593077 containerd[2037]: time="2025-02-13T19:50:31.591995764Z" level=info msg="StartContainer for \"5adb796650391562cdbf9bb38ef199e55128d1c0f5ea75f85d73b218162fa811\" returns successfully"
Feb 13 19:50:31.704191 containerd[2037]: time="2025-02-13T19:50:31.702240893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-899bfd54-c6kfl,Uid:d94b6c05-9941-4d00-9fa6-b2a1c452394b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52\""
Feb 13 19:50:31.812683 kubelet[3647]: I0213 19:50:31.812181    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hjblt" podStartSLOduration=37.812159465 podStartE2EDuration="37.812159465s" podCreationTimestamp="2025-02-13 19:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:31.811532381 +0000 UTC m=+50.893740361" watchObservedRunningTime="2025-02-13 19:50:31.812159465 +0000 UTC m=+50.894367421"
Feb 13 19:50:32.071306 systemd-networkd[1600]: cali46c1bdbb3cd: Gained IPv6LL
Feb 13 19:50:32.135399 systemd-networkd[1600]: calia1b3310afe0: Gained IPv6LL
Feb 13 19:50:32.462911 kubelet[3647]: I0213 19:50:32.460706    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:33.159429 systemd-networkd[1600]: cali196cf0e0ac1: Gained IPv6LL
Feb 13 19:50:33.726535 containerd[2037]: time="2025-02-13T19:50:33.726480475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:33.729337 containerd[2037]: time="2025-02-13T19:50:33.729254767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409"
Feb 13 19:50:33.730001 containerd[2037]: time="2025-02-13T19:50:33.729656983Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:33.735710 containerd[2037]: time="2025-02-13T19:50:33.735657007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:33.737348 containerd[2037]: time="2025-02-13T19:50:33.737089291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.42124972s"
Feb 13 19:50:33.737348 containerd[2037]: time="2025-02-13T19:50:33.737148079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Feb 13 19:50:33.740708 containerd[2037]: time="2025-02-13T19:50:33.740399923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Feb 13 19:50:33.743443 containerd[2037]: time="2025-02-13T19:50:33.743340559Z" level=info msg="CreateContainer within sandbox \"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:50:33.766845 containerd[2037]: time="2025-02-13T19:50:33.766783495Z" level=info msg="CreateContainer within sandbox \"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c56b3d5ad5954d637a4f5dee61b617db7223fa3ce48816be3b351848e4957ae9\""
Feb 13 19:50:33.768136 containerd[2037]: time="2025-02-13T19:50:33.767798743Z" level=info msg="StartContainer for \"c56b3d5ad5954d637a4f5dee61b617db7223fa3ce48816be3b351848e4957ae9\""
Feb 13 19:50:33.845333 systemd[1]: run-containerd-runc-k8s.io-c56b3d5ad5954d637a4f5dee61b617db7223fa3ce48816be3b351848e4957ae9-runc.iSKyJ8.mount: Deactivated successfully.
Feb 13 19:50:33.911829 containerd[2037]: time="2025-02-13T19:50:33.910635488Z" level=info msg="StartContainer for \"c56b3d5ad5954d637a4f5dee61b617db7223fa3ce48816be3b351848e4957ae9\" returns successfully"
Feb 13 19:50:34.473326 systemd[1]: Started sshd@8-172.31.22.232:22-139.178.89.65:59892.service - OpenSSH per-connection server daemon (139.178.89.65:59892).
Feb 13 19:50:34.665703 sshd[5793]: Accepted publickey for core from 139.178.89.65 port 59892 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:34.669648 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:34.679005 systemd-logind[2018]: New session 9 of user core.
Feb 13 19:50:34.689535 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:50:34.859214 kubelet[3647]: I0213 19:50:34.858903    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-899bfd54-5wp2p" podStartSLOduration=27.88004631 podStartE2EDuration="30.858879897s" podCreationTimestamp="2025-02-13 19:50:04 +0000 UTC" firstStartedPulling="2025-02-13 19:50:30.7599742 +0000 UTC m=+49.842182144" lastFinishedPulling="2025-02-13 19:50:33.738807775 +0000 UTC m=+52.821015731" observedRunningTime="2025-02-13 19:50:34.854128149 +0000 UTC m=+53.936336129" watchObservedRunningTime="2025-02-13 19:50:34.858879897 +0000 UTC m=+53.941087865"
Feb 13 19:50:35.207509 sshd[5793]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:35.224859 systemd[1]: sshd@8-172.31.22.232:22-139.178.89.65:59892.service: Deactivated successfully.
Feb 13 19:50:35.246573 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:50:35.264115 systemd-logind[2018]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:50:35.270290 systemd-logind[2018]: Removed session 9.
Feb 13 19:50:35.431378 ntpd[1997]: Listen normally on 6 vxlan.calico 192.168.42.64:123
Feb 13 19:50:35.431889 ntpd[1997]: Listen normally on 7 vxlan.calico [fe80::64d3:fbff:fe0d:4cdb%4]:123
Feb 13 19:50:35.434540 ntpd[1997]: Listen normally on 8 cali97ff637fe97 [fe80::ecee:eeff:feee:eeee%7]:123
Feb 13 19:50:35.434809 ntpd[1997]: Listen normally on 9 cali804f8faff4e [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 19:50:35.434931 ntpd[1997]: Listen normally on 10 cali46c1bdbb3cd [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 19:50:35.434999 ntpd[1997]: Listen normally on 11 calia1b3310afe0 [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 19:50:35.435102 ntpd[1997]: Listen normally on 12 calibe9a11802b3 [fe80::ecee:eeff:feee:eeee%11]:123
Feb 13 19:50:35.435176 ntpd[1997]: Listen normally on 13 cali196cf0e0ac1 [fe80::ecee:eeff:feee:eeee%12]:123
Feb 13 19:50:35.814535 kubelet[3647]: I0213 19:50:35.814477    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:36.728286 containerd[2037]: time="2025-02-13T19:50:36.728188738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.729886 containerd[2037]: time="2025-02-13T19:50:36.729814990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Feb 13 19:50:36.731103 containerd[2037]: time="2025-02-13T19:50:36.730995634Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.735080 containerd[2037]: time="2025-02-13T19:50:36.734859730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:36.736547 containerd[2037]: time="2025-02-13T19:50:36.736359478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.995898643s"
Feb 13 19:50:36.736547 containerd[2037]: time="2025-02-13T19:50:36.736413274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Feb 13 19:50:36.741835 containerd[2037]: time="2025-02-13T19:50:36.739835350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 19:50:36.774072 containerd[2037]: time="2025-02-13T19:50:36.773440078Z" level=info msg="CreateContainer within sandbox \"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 19:50:36.795165 containerd[2037]: time="2025-02-13T19:50:36.794599978Z" level=info msg="CreateContainer within sandbox \"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad34421a170ef7dffe4bf3430a4d13c5ea70bc565e075ff79ef1e88d8fb2cea2\""
Feb 13 19:50:36.800218 containerd[2037]: time="2025-02-13T19:50:36.799357882Z" level=info msg="StartContainer for \"ad34421a170ef7dffe4bf3430a4d13c5ea70bc565e075ff79ef1e88d8fb2cea2\""
Feb 13 19:50:36.941704 containerd[2037]: time="2025-02-13T19:50:36.940665719Z" level=info msg="StartContainer for \"ad34421a170ef7dffe4bf3430a4d13c5ea70bc565e075ff79ef1e88d8fb2cea2\" returns successfully"
Feb 13 19:50:37.865786 kubelet[3647]: I0213 19:50:37.865673    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f4c55485-sms9j" podStartSLOduration=28.020271314 podStartE2EDuration="33.865598879s" podCreationTimestamp="2025-02-13 19:50:04 +0000 UTC" firstStartedPulling="2025-02-13 19:50:30.893308133 +0000 UTC m=+49.975516089" lastFinishedPulling="2025-02-13 19:50:36.73863571 +0000 UTC m=+55.820843654" observedRunningTime="2025-02-13 19:50:37.863833931 +0000 UTC m=+56.946041923" watchObservedRunningTime="2025-02-13 19:50:37.865598879 +0000 UTC m=+56.947806859"
Feb 13 19:50:38.334981 containerd[2037]: time="2025-02-13T19:50:38.334926034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:38.336874 containerd[2037]: time="2025-02-13T19:50:38.336789730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 19:50:38.337725 containerd[2037]: time="2025-02-13T19:50:38.337006966Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:38.341466 containerd[2037]: time="2025-02-13T19:50:38.341386726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:38.343908 containerd[2037]: time="2025-02-13T19:50:38.343845442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.603921988s"
Feb 13 19:50:38.344169 containerd[2037]: time="2025-02-13T19:50:38.344134966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 19:50:38.347485 containerd[2037]: time="2025-02-13T19:50:38.347421442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:50:38.356015 containerd[2037]: time="2025-02-13T19:50:38.355925662Z" level=info msg="CreateContainer within sandbox \"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 19:50:38.379557 containerd[2037]: time="2025-02-13T19:50:38.379473010Z" level=info msg="CreateContainer within sandbox \"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a28ac7a058b1c51a9bc976bee96fc3c061126eed12ab7597a078487ac6a9646\""
Feb 13 19:50:38.384268 containerd[2037]: time="2025-02-13T19:50:38.383363410Z" level=info msg="StartContainer for \"1a28ac7a058b1c51a9bc976bee96fc3c061126eed12ab7597a078487ac6a9646\""
Feb 13 19:50:38.526962 containerd[2037]: time="2025-02-13T19:50:38.526904783Z" level=info msg="StartContainer for \"1a28ac7a058b1c51a9bc976bee96fc3c061126eed12ab7597a078487ac6a9646\" returns successfully"
Feb 13 19:50:38.686745 containerd[2037]: time="2025-02-13T19:50:38.686599488Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:38.688106 containerd[2037]: time="2025-02-13T19:50:38.687997848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 19:50:38.693339 containerd[2037]: time="2025-02-13T19:50:38.693278292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 345.79223ms"
Feb 13 19:50:38.693654 containerd[2037]: time="2025-02-13T19:50:38.693342264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Feb 13 19:50:38.697940 containerd[2037]: time="2025-02-13T19:50:38.697759044Z" level=info msg="CreateContainer within sandbox \"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:50:38.715684 containerd[2037]: time="2025-02-13T19:50:38.715551252Z" level=info msg="CreateContainer within sandbox \"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1b6189bab73c29b957e2dfa7637c8d4238bd04324609bc20746b832950990bb\""
Feb 13 19:50:38.719296 containerd[2037]: time="2025-02-13T19:50:38.716258172Z" level=info msg="StartContainer for \"c1b6189bab73c29b957e2dfa7637c8d4238bd04324609bc20746b832950990bb\""
Feb 13 19:50:38.785981 systemd[1]: run-containerd-runc-k8s.io-c1b6189bab73c29b957e2dfa7637c8d4238bd04324609bc20746b832950990bb-runc.26bQe2.mount: Deactivated successfully.
Feb 13 19:50:38.902536 containerd[2037]: time="2025-02-13T19:50:38.902395561Z" level=info msg="StartContainer for \"c1b6189bab73c29b957e2dfa7637c8d4238bd04324609bc20746b832950990bb\" returns successfully"
Feb 13 19:50:39.386511 kubelet[3647]: I0213 19:50:39.386335    3647 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 19:50:39.386511 kubelet[3647]: I0213 19:50:39.386379    3647 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 19:50:39.898628 kubelet[3647]: I0213 19:50:39.897610    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-899bfd54-c6kfl" podStartSLOduration=28.910880947 podStartE2EDuration="35.897564458s" podCreationTimestamp="2025-02-13 19:50:04 +0000 UTC" firstStartedPulling="2025-02-13 19:50:31.707661917 +0000 UTC m=+50.789869873" lastFinishedPulling="2025-02-13 19:50:38.694345404 +0000 UTC m=+57.776553384" observedRunningTime="2025-02-13 19:50:39.89496035 +0000 UTC m=+58.977168330" watchObservedRunningTime="2025-02-13 19:50:39.897564458 +0000 UTC m=+58.979772438"
Feb 13 19:50:39.901359 kubelet[3647]: I0213 19:50:39.901206    3647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k6np6" podStartSLOduration=27.597119572 podStartE2EDuration="36.901157738s" podCreationTimestamp="2025-02-13 19:50:03 +0000 UTC" firstStartedPulling="2025-02-13 19:50:29.042516084 +0000 UTC m=+48.124724040" lastFinishedPulling="2025-02-13 19:50:38.346554166 +0000 UTC m=+57.428762206" observedRunningTime="2025-02-13 19:50:38.891850525 +0000 UTC m=+57.974058505" watchObservedRunningTime="2025-02-13 19:50:39.901157738 +0000 UTC m=+58.983365790"
Feb 13 19:50:40.234012 systemd[1]: Started sshd@9-172.31.22.232:22-139.178.89.65:51706.service - OpenSSH per-connection server daemon (139.178.89.65:51706).
Feb 13 19:50:40.419683 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 51706 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:40.425105 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:40.434978 systemd-logind[2018]: New session 10 of user core.
Feb 13 19:50:40.442669 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:50:40.719650 sshd[5958]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:40.726567 systemd-logind[2018]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:50:40.728520 systemd[1]: sshd@9-172.31.22.232:22-139.178.89.65:51706.service: Deactivated successfully.
Feb 13 19:50:40.734625 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:50:40.738667 systemd-logind[2018]: Removed session 10.
Feb 13 19:50:40.750515 systemd[1]: Started sshd@10-172.31.22.232:22-139.178.89.65:51708.service - OpenSSH per-connection server daemon (139.178.89.65:51708).
Feb 13 19:50:40.873756 kubelet[3647]: I0213 19:50:40.873698    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:40.938878 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 51708 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:40.941795 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:40.951093 systemd-logind[2018]: New session 11 of user core.
Feb 13 19:50:40.961524 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:50:41.148556 containerd[2037]: time="2025-02-13T19:50:41.148470948Z" level=info msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\""
Feb 13 19:50:41.342903 sshd[5973]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:41.367246 systemd[1]: sshd@10-172.31.22.232:22-139.178.89.65:51708.service: Deactivated successfully.
Feb 13 19:50:41.383985 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:50:41.398707 systemd-logind[2018]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:50:41.415519 systemd[1]: Started sshd@11-172.31.22.232:22-139.178.89.65:51712.service - OpenSSH per-connection server daemon (139.178.89.65:51712).
Feb 13 19:50:41.423816 systemd-logind[2018]: Removed session 11.
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.261 [WARNING][5994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a64d6ad-b799-442a-9fba-40d222a33c18", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e", Pod:"coredns-7db6d8ff4d-hjblt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe9a11802b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.261 [INFO][5994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.261 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" iface="eth0" netns=""
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.261 [INFO][5994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.262 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.332 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.335 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.335 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.412 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.412 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.420 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:41.442332 containerd[2037]: 2025-02-13 19:50:41.431 [INFO][5994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.444484 containerd[2037]: time="2025-02-13T19:50:41.443180065Z" level=info msg="TearDown network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" successfully"
Feb 13 19:50:41.444484 containerd[2037]: time="2025-02-13T19:50:41.443222173Z" level=info msg="StopPodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" returns successfully"
Feb 13 19:50:41.445536 containerd[2037]: time="2025-02-13T19:50:41.445011997Z" level=info msg="RemovePodSandbox for \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\""
Feb 13 19:50:41.445536 containerd[2037]: time="2025-02-13T19:50:41.445176445Z" level=info msg="Forcibly stopping sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\""
Feb 13 19:50:41.636312 sshd[6015]: Accepted publickey for core from 139.178.89.65 port 51712 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:41.641612 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.528 [WARNING][6028] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8a64d6ad-b799-442a-9fba-40d222a33c18", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ebeddce3c6db36e2118e8c781d32c1e7ae6516402675204ee3f53ebc1199cf6e", Pod:"coredns-7db6d8ff4d-hjblt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe9a11802b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.532 [INFO][6028] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.532 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" iface="eth0" netns=""
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.532 [INFO][6028] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.532 [INFO][6028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.617 [INFO][6036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.618 [INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.618 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.637 [WARNING][6036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.638 [INFO][6036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" HandleID="k8s-pod-network.13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--hjblt-eth0"
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.640 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:41.647871 containerd[2037]: 2025-02-13 19:50:41.643 [INFO][6028] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90"
Feb 13 19:50:41.648714 containerd[2037]: time="2025-02-13T19:50:41.647952542Z" level=info msg="TearDown network for sandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" successfully"
Feb 13 19:50:41.655249 containerd[2037]: time="2025-02-13T19:50:41.653925866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:41.656105 containerd[2037]: time="2025-02-13T19:50:41.655105802Z" level=info msg="RemovePodSandbox \"13fb49e0ee9b4b0b57e809bb22b271849aa5966d2165d686b73006a542297f90\" returns successfully"
Feb 13 19:50:41.657145 systemd-logind[2018]: New session 12 of user core.
Feb 13 19:50:41.659017 containerd[2037]: time="2025-02-13T19:50:41.658201778Z" level=info msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\""
Feb 13 19:50:41.663408 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.734 [WARNING][6059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94b6c05-9941-4d00-9fa6-b2a1c452394b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52", Pod:"calico-apiserver-899bfd54-c6kfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali196cf0e0ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.735 [INFO][6059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.735 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" iface="eth0" netns=""
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.735 [INFO][6059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.735 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.791 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.792 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.792 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.813 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.813 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.815 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:41.833123 containerd[2037]: 2025-02-13 19:50:41.827 [INFO][6059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:41.833123 containerd[2037]: time="2025-02-13T19:50:41.832580583Z" level=info msg="TearDown network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" successfully"
Feb 13 19:50:41.833123 containerd[2037]: time="2025-02-13T19:50:41.832620351Z" level=info msg="StopPodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" returns successfully"
Feb 13 19:50:41.835736 containerd[2037]: time="2025-02-13T19:50:41.834533343Z" level=info msg="RemovePodSandbox for \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\""
Feb 13 19:50:41.835736 containerd[2037]: time="2025-02-13T19:50:41.834693375Z" level=info msg="Forcibly stopping sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\""
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.950 [WARNING][6092] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94b6c05-9941-4d00-9fa6-b2a1c452394b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"9711b2830dfb85af28a2f7b18de5c3fca9781cf42416ab085bec023c9396be52", Pod:"calico-apiserver-899bfd54-c6kfl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali196cf0e0ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.951 [INFO][6092] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.951 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" iface="eth0" netns=""
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.951 [INFO][6092] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.951 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.994 [INFO][6098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.994 [INFO][6098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:41.994 [INFO][6098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:42.009 [WARNING][6098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:42.010 [INFO][6098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" HandleID="k8s-pod-network.7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--c6kfl-eth0"
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:42.015 [INFO][6098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.023276 containerd[2037]: 2025-02-13 19:50:42.019 [INFO][6092] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f"
Feb 13 19:50:42.024876 containerd[2037]: time="2025-02-13T19:50:42.024297720Z" level=info msg="TearDown network for sandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" successfully"
Feb 13 19:50:42.029738 containerd[2037]: time="2025-02-13T19:50:42.029624904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:42.029738 containerd[2037]: time="2025-02-13T19:50:42.029722608Z" level=info msg="RemovePodSandbox \"7b32c67a9938044bab9b6aeca6f558e1427419e8b6211bdcc884ff88c3856c1f\" returns successfully"
Feb 13 19:50:42.030417 containerd[2037]: time="2025-02-13T19:50:42.030368496Z" level=info msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\""
Feb 13 19:50:42.047336 sshd[6015]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:42.058391 systemd[1]: sshd@11-172.31.22.232:22-139.178.89.65:51712.service: Deactivated successfully.
Feb 13 19:50:42.069422 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:50:42.075107 systemd-logind[2018]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:50:42.078083 systemd-logind[2018]: Removed session 12.
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.113 [WARNING][6116] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b60d0c4-0403-4702-b85f-ae7526b9b83b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9", Pod:"calico-apiserver-899bfd54-5wp2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c1bdbb3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.114 [INFO][6116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.114 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" iface="eth0" netns=""
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.114 [INFO][6116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.114 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.156 [INFO][6125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.156 [INFO][6125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.156 [INFO][6125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.174 [WARNING][6125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.174 [INFO][6125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.179 [INFO][6125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.183840 containerd[2037]: 2025-02-13 19:50:42.181 [INFO][6116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.185651 containerd[2037]: time="2025-02-13T19:50:42.184645201Z" level=info msg="TearDown network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" successfully"
Feb 13 19:50:42.185651 containerd[2037]: time="2025-02-13T19:50:42.184687573Z" level=info msg="StopPodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" returns successfully"
Feb 13 19:50:42.185651 containerd[2037]: time="2025-02-13T19:50:42.185398477Z" level=info msg="RemovePodSandbox for \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\""
Feb 13 19:50:42.185651 containerd[2037]: time="2025-02-13T19:50:42.185443705Z" level=info msg="Forcibly stopping sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\""
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.267 [WARNING][6143] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0", GenerateName:"calico-apiserver-899bfd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b60d0c4-0403-4702-b85f-ae7526b9b83b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"899bfd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ff21ccb9b81e37ca2418ee449eda60e694129de05f06f8600859aa0a8866c7b9", Pod:"calico-apiserver-899bfd54-5wp2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46c1bdbb3cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.267 [INFO][6143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.267 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" iface="eth0" netns=""
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.267 [INFO][6143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.267 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.307 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.308 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.308 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.320 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.320 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" HandleID="k8s-pod-network.d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af" Workload="ip--172--31--22--232-k8s-calico--apiserver--899bfd54--5wp2p-eth0"
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.322 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.328012 containerd[2037]: 2025-02-13 19:50:42.325 [INFO][6143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af"
Feb 13 19:50:42.329879 containerd[2037]: time="2025-02-13T19:50:42.328102910Z" level=info msg="TearDown network for sandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" successfully"
Feb 13 19:50:42.332509 containerd[2037]: time="2025-02-13T19:50:42.332375270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:42.332666 containerd[2037]: time="2025-02-13T19:50:42.332599670Z" level=info msg="RemovePodSandbox \"d52f094034992fd85d65f1623663b3c452dff9e0e9925962f3c9451112c7e7af\" returns successfully"
Feb 13 19:50:42.333494 containerd[2037]: time="2025-02-13T19:50:42.333450842Z" level=info msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\""
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.395 [WARNING][6168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a53a8aa8-6a9b-4643-89f4-26162e962c9a", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c", Pod:"csi-node-driver-k6np6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804f8faff4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.397 [INFO][6168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.397 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" iface="eth0" netns=""
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.397 [INFO][6168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.397 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.434 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.435 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.435 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.447 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.447 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.449 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.454076 containerd[2037]: 2025-02-13 19:50:42.451 [INFO][6168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.454926 containerd[2037]: time="2025-02-13T19:50:42.454011998Z" level=info msg="TearDown network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" successfully"
Feb 13 19:50:42.455108 containerd[2037]: time="2025-02-13T19:50:42.454927010Z" level=info msg="StopPodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" returns successfully"
Feb 13 19:50:42.456579 containerd[2037]: time="2025-02-13T19:50:42.456155438Z" level=info msg="RemovePodSandbox for \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\""
Feb 13 19:50:42.456579 containerd[2037]: time="2025-02-13T19:50:42.456208694Z" level=info msg="Forcibly stopping sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\""
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.522 [WARNING][6192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a53a8aa8-6a9b-4643-89f4-26162e962c9a", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"ea18891c6886d27c92f5bbeba1b221fe6ad8aaf930cab81d2bbd4ea25ff7c08c", Pod:"csi-node-driver-k6np6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali804f8faff4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.522 [INFO][6192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.522 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" iface="eth0" netns=""
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.522 [INFO][6192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.523 [INFO][6192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.559 [INFO][6199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.559 [INFO][6199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.559 [INFO][6199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.572 [WARNING][6199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.572 [INFO][6199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" HandleID="k8s-pod-network.a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d" Workload="ip--172--31--22--232-k8s-csi--node--driver--k6np6-eth0"
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.574 [INFO][6199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.579731 containerd[2037]: 2025-02-13 19:50:42.577 [INFO][6192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d"
Feb 13 19:50:42.579731 containerd[2037]: time="2025-02-13T19:50:42.579686523Z" level=info msg="TearDown network for sandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" successfully"
Feb 13 19:50:42.587134 containerd[2037]: time="2025-02-13T19:50:42.586989015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:42.587291 containerd[2037]: time="2025-02-13T19:50:42.587174343Z" level=info msg="RemovePodSandbox \"a968dd6d0b136cc34569db1485876e3b675255cb759edd1694825cf936c00d3d\" returns successfully"
Feb 13 19:50:42.587920 containerd[2037]: time="2025-02-13T19:50:42.587787015Z" level=info msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\""
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.651 [WARNING][6217] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0", GenerateName:"calico-kube-controllers-78f4c55485-", Namespace:"calico-system", SelfLink:"", UID:"91cd868f-3147-489a-9154-5e881b7a25ed", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4c55485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9", Pod:"calico-kube-controllers-78f4c55485-sms9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1b3310afe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.652 [INFO][6217] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.652 [INFO][6217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" iface="eth0" netns=""
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.652 [INFO][6217] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.652 [INFO][6217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.687 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.688 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.688 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.702 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.702 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.705 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.709858 containerd[2037]: 2025-02-13 19:50:42.707 [INFO][6217] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.712128 containerd[2037]: time="2025-02-13T19:50:42.709912612Z" level=info msg="TearDown network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" successfully"
Feb 13 19:50:42.712128 containerd[2037]: time="2025-02-13T19:50:42.709951336Z" level=info msg="StopPodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" returns successfully"
Feb 13 19:50:42.712128 containerd[2037]: time="2025-02-13T19:50:42.711345076Z" level=info msg="RemovePodSandbox for \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\""
Feb 13 19:50:42.712128 containerd[2037]: time="2025-02-13T19:50:42.711398248Z" level=info msg="Forcibly stopping sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\""
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.776 [WARNING][6242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0", GenerateName:"calico-kube-controllers-78f4c55485-", Namespace:"calico-system", SelfLink:"", UID:"91cd868f-3147-489a-9154-5e881b7a25ed", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4c55485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"2f8d42232f46203e0a7fff0c28e5277e4455129f0f401e84d8d62ac9e96d18f9", Pod:"calico-kube-controllers-78f4c55485-sms9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1b3310afe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.776 [INFO][6242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.776 [INFO][6242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" iface="eth0" netns=""
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.776 [INFO][6242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.776 [INFO][6242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.810 [INFO][6249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.810 [INFO][6249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.810 [INFO][6249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.822 [WARNING][6249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.822 [INFO][6249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" HandleID="k8s-pod-network.fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854" Workload="ip--172--31--22--232-k8s-calico--kube--controllers--78f4c55485--sms9j-eth0"
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.825 [INFO][6249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.829696 containerd[2037]: 2025-02-13 19:50:42.827 [INFO][6242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854"
Feb 13 19:50:42.830992 containerd[2037]: time="2025-02-13T19:50:42.829951300Z" level=info msg="TearDown network for sandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" successfully"
Feb 13 19:50:42.836068 containerd[2037]: time="2025-02-13T19:50:42.835960828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:42.836368 containerd[2037]: time="2025-02-13T19:50:42.836089960Z" level=info msg="RemovePodSandbox \"fd0b91cd7fad8251253bec74f49dda09e21be404bf39401d07c04b46e62a7854\" returns successfully"
Feb 13 19:50:42.837531 containerd[2037]: time="2025-02-13T19:50:42.837003376Z" level=info msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\""
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.913 [WARNING][6267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"244383ed-be9f-4e55-9206-721fd35d9360", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125", Pod:"coredns-7db6d8ff4d-vbqfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97ff637fe97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.913 [INFO][6267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.913 [INFO][6267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" iface="eth0" netns=""
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.913 [INFO][6267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.913 [INFO][6267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.957 [INFO][6273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.957 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.957 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.970 [WARNING][6273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.970 [INFO][6273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.974 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:42.978739 containerd[2037]: 2025-02-13 19:50:42.976 [INFO][6267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:42.980228 containerd[2037]: time="2025-02-13T19:50:42.979733669Z" level=info msg="TearDown network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" successfully"
Feb 13 19:50:42.980228 containerd[2037]: time="2025-02-13T19:50:42.979779689Z" level=info msg="StopPodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" returns successfully"
Feb 13 19:50:42.980591 containerd[2037]: time="2025-02-13T19:50:42.980419049Z" level=info msg="RemovePodSandbox for \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\""
Feb 13 19:50:42.980591 containerd[2037]: time="2025-02-13T19:50:42.980475929Z" level=info msg="Forcibly stopping sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\""
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.040 [WARNING][6292] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"244383ed-be9f-4e55-9206-721fd35d9360", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-232", ContainerID:"4bd1040b43948892fe3f88657918a8b838a7433f7c5ef04d03afa2552dbb3125", Pod:"coredns-7db6d8ff4d-vbqfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97ff637fe97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.041 [INFO][6292] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.041 [INFO][6292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" iface="eth0" netns=""
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.041 [INFO][6292] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.041 [INFO][6292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.074 [INFO][6298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.075 [INFO][6298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.075 [INFO][6298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.089 [WARNING][6298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.089 [INFO][6298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" HandleID="k8s-pod-network.5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6" Workload="ip--172--31--22--232-k8s-coredns--7db6d8ff4d--vbqfz-eth0"
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.092 [INFO][6298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:43.097373 containerd[2037]: 2025-02-13 19:50:43.094 [INFO][6292] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6"
Feb 13 19:50:43.097373 containerd[2037]: time="2025-02-13T19:50:43.097348345Z" level=info msg="TearDown network for sandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" successfully"
Feb 13 19:50:43.101554 containerd[2037]: time="2025-02-13T19:50:43.101447401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:43.101739 containerd[2037]: time="2025-02-13T19:50:43.101552701Z" level=info msg="RemovePodSandbox \"5f5eea883835eb07f4a88669b40fb791c74d352125f4a3207b655eded75706d6\" returns successfully"
Feb 13 19:50:47.078669 systemd[1]: Started sshd@12-172.31.22.232:22-139.178.89.65:58472.service - OpenSSH per-connection server daemon (139.178.89.65:58472).
Feb 13 19:50:47.260882 sshd[6331]: Accepted publickey for core from 139.178.89.65 port 58472 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:47.263752 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:47.271253 systemd-logind[2018]: New session 13 of user core.
Feb 13 19:50:47.279637 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:50:47.532268 sshd[6331]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:47.540081 systemd[1]: sshd@12-172.31.22.232:22-139.178.89.65:58472.service: Deactivated successfully.
Feb 13 19:50:47.540679 systemd-logind[2018]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:50:47.548632 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:50:47.551623 systemd-logind[2018]: Removed session 13.
Feb 13 19:50:52.567574 systemd[1]: Started sshd@13-172.31.22.232:22-139.178.89.65:58484.service - OpenSSH per-connection server daemon (139.178.89.65:58484).
Feb 13 19:50:52.736680 sshd[6355]: Accepted publickey for core from 139.178.89.65 port 58484 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:52.739494 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:52.747566 systemd-logind[2018]: New session 14 of user core.
Feb 13 19:50:52.755526 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:50:52.996375 sshd[6355]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:53.004777 systemd[1]: sshd@13-172.31.22.232:22-139.178.89.65:58484.service: Deactivated successfully.
Feb 13 19:50:53.010453 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:50:53.010995 systemd-logind[2018]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:50:53.015274 systemd-logind[2018]: Removed session 14.
Feb 13 19:50:58.030245 systemd[1]: Started sshd@14-172.31.22.232:22-139.178.89.65:56760.service - OpenSSH per-connection server daemon (139.178.89.65:56760).
Feb 13 19:50:58.203749 sshd[6390]: Accepted publickey for core from 139.178.89.65 port 56760 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:50:58.207130 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:58.215845 systemd-logind[2018]: New session 15 of user core.
Feb 13 19:50:58.224006 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:50:58.471359 sshd[6390]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:58.482961 systemd[1]: sshd@14-172.31.22.232:22-139.178.89.65:56760.service: Deactivated successfully.
Feb 13 19:50:58.493733 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:50:58.495568 systemd-logind[2018]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:50:58.497664 systemd-logind[2018]: Removed session 15.
Feb 13 19:51:03.504584 systemd[1]: Started sshd@15-172.31.22.232:22-139.178.89.65:56774.service - OpenSSH per-connection server daemon (139.178.89.65:56774).
Feb 13 19:51:03.681666 sshd[6428]: Accepted publickey for core from 139.178.89.65 port 56774 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:03.683573 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:03.692594 systemd-logind[2018]: New session 16 of user core.
Feb 13 19:51:03.698615 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:51:03.935734 sshd[6428]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:03.943661 systemd[1]: sshd@15-172.31.22.232:22-139.178.89.65:56774.service: Deactivated successfully.
Feb 13 19:51:03.950225 systemd-logind[2018]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:51:03.950915 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:51:03.953881 systemd-logind[2018]: Removed session 16.
Feb 13 19:51:03.967585 systemd[1]: Started sshd@16-172.31.22.232:22-139.178.89.65:56788.service - OpenSSH per-connection server daemon (139.178.89.65:56788).
Feb 13 19:51:04.149475 sshd[6442]: Accepted publickey for core from 139.178.89.65 port 56788 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:04.152313 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:04.159778 systemd-logind[2018]: New session 17 of user core.
Feb 13 19:51:04.169106 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:51:04.862505 sshd[6442]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:04.872292 systemd[1]: sshd@16-172.31.22.232:22-139.178.89.65:56788.service: Deactivated successfully.
Feb 13 19:51:04.883478 systemd-logind[2018]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:51:04.884142 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:51:04.912973 systemd[1]: Started sshd@17-172.31.22.232:22-139.178.89.65:42054.service - OpenSSH per-connection server daemon (139.178.89.65:42054).
Feb 13 19:51:04.917613 systemd-logind[2018]: Removed session 17.
Feb 13 19:51:05.106661 sshd[6454]: Accepted publickey for core from 139.178.89.65 port 42054 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:05.109301 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:05.132153 systemd-logind[2018]: New session 18 of user core.
Feb 13 19:51:05.143889 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:51:06.054504 kubelet[3647]: I0213 19:51:06.054417    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:51:08.297335 kubelet[3647]: I0213 19:51:08.296528    3647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:51:08.965530 sshd[6454]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:08.978485 systemd[1]: sshd@17-172.31.22.232:22-139.178.89.65:42054.service: Deactivated successfully.
Feb 13 19:51:09.008293 systemd-logind[2018]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:51:09.013935 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:51:09.025945 systemd[1]: Started sshd@18-172.31.22.232:22-139.178.89.65:42062.service - OpenSSH per-connection server daemon (139.178.89.65:42062).
Feb 13 19:51:09.030794 systemd-logind[2018]: Removed session 18.
Feb 13 19:51:09.231654 sshd[6481]: Accepted publickey for core from 139.178.89.65 port 42062 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:09.238224 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:09.257439 systemd-logind[2018]: New session 19 of user core.
Feb 13 19:51:09.266528 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:51:09.839810 sshd[6481]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:09.847656 systemd[1]: sshd@18-172.31.22.232:22-139.178.89.65:42062.service: Deactivated successfully.
Feb 13 19:51:09.855481 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:51:09.856147 systemd-logind[2018]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:51:09.861871 systemd-logind[2018]: Removed session 19.
Feb 13 19:51:09.885375 systemd[1]: Started sshd@19-172.31.22.232:22-139.178.89.65:42070.service - OpenSSH per-connection server daemon (139.178.89.65:42070).
Feb 13 19:51:10.056675 sshd[6493]: Accepted publickey for core from 139.178.89.65 port 42070 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:10.059721 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:10.072143 systemd-logind[2018]: New session 20 of user core.
Feb 13 19:51:10.079658 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:51:10.334700 sshd[6493]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:10.340928 systemd[1]: sshd@19-172.31.22.232:22-139.178.89.65:42070.service: Deactivated successfully.
Feb 13 19:51:10.349303 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:51:10.355565 systemd-logind[2018]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:51:10.357483 systemd-logind[2018]: Removed session 20.
Feb 13 19:51:15.368667 systemd[1]: Started sshd@20-172.31.22.232:22-139.178.89.65:57008.service - OpenSSH per-connection server daemon (139.178.89.65:57008).
Feb 13 19:51:15.578099 sshd[6530]: Accepted publickey for core from 139.178.89.65 port 57008 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:15.581646 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:15.591176 systemd-logind[2018]: New session 21 of user core.
Feb 13 19:51:15.602689 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:51:15.900696 sshd[6530]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:15.908972 systemd[1]: sshd@20-172.31.22.232:22-139.178.89.65:57008.service: Deactivated successfully.
Feb 13 19:51:15.920571 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:51:15.922850 systemd-logind[2018]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:51:15.925238 systemd-logind[2018]: Removed session 21.
Feb 13 19:51:20.932538 systemd[1]: Started sshd@21-172.31.22.232:22-139.178.89.65:57014.service - OpenSSH per-connection server daemon (139.178.89.65:57014).
Feb 13 19:51:21.117447 sshd[6547]: Accepted publickey for core from 139.178.89.65 port 57014 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:21.120156 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:21.127776 systemd-logind[2018]: New session 22 of user core.
Feb 13 19:51:21.142620 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:51:21.384348 sshd[6547]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:21.393907 systemd[1]: sshd@21-172.31.22.232:22-139.178.89.65:57014.service: Deactivated successfully.
Feb 13 19:51:21.404746 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:51:21.405208 systemd-logind[2018]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:51:21.411298 systemd-logind[2018]: Removed session 22.
Feb 13 19:51:26.413520 systemd[1]: Started sshd@22-172.31.22.232:22-139.178.89.65:53276.service - OpenSSH per-connection server daemon (139.178.89.65:53276).
Feb 13 19:51:26.591610 sshd[6563]: Accepted publickey for core from 139.178.89.65 port 53276 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:26.594371 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:26.602825 systemd-logind[2018]: New session 23 of user core.
Feb 13 19:51:26.609700 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:51:26.848379 sshd[6563]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:26.854105 systemd[1]: sshd@22-172.31.22.232:22-139.178.89.65:53276.service: Deactivated successfully.
Feb 13 19:51:26.860101 systemd-logind[2018]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:51:26.864264 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:51:26.867265 systemd-logind[2018]: Removed session 23.
Feb 13 19:51:31.882659 systemd[1]: Started sshd@23-172.31.22.232:22-139.178.89.65:53284.service - OpenSSH per-connection server daemon (139.178.89.65:53284).
Feb 13 19:51:32.055012 sshd[6577]: Accepted publickey for core from 139.178.89.65 port 53284 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:32.057695 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:32.066470 systemd-logind[2018]: New session 24 of user core.
Feb 13 19:51:32.071731 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:51:32.337856 sshd[6577]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:32.349447 systemd[1]: sshd@23-172.31.22.232:22-139.178.89.65:53284.service: Deactivated successfully.
Feb 13 19:51:32.357742 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:51:32.359539 systemd-logind[2018]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:51:32.361628 systemd-logind[2018]: Removed session 24.
Feb 13 19:51:37.377466 systemd[1]: Started sshd@24-172.31.22.232:22-139.178.89.65:38642.service - OpenSSH per-connection server daemon (139.178.89.65:38642).
Feb 13 19:51:37.556373 sshd[6614]: Accepted publickey for core from 139.178.89.65 port 38642 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:37.559631 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:37.568387 systemd-logind[2018]: New session 25 of user core.
Feb 13 19:51:37.577309 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:51:37.814916 sshd[6614]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:37.823716 systemd[1]: sshd@24-172.31.22.232:22-139.178.89.65:38642.service: Deactivated successfully.
Feb 13 19:51:37.831328 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:51:37.833120 systemd-logind[2018]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:51:37.834907 systemd-logind[2018]: Removed session 25.
Feb 13 19:51:42.846602 systemd[1]: Started sshd@25-172.31.22.232:22-139.178.89.65:38656.service - OpenSSH per-connection server daemon (139.178.89.65:38656).
Feb 13 19:51:43.029196 sshd[6632]: Accepted publickey for core from 139.178.89.65 port 38656 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:43.031993 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:43.039997 systemd-logind[2018]: New session 26 of user core.
Feb 13 19:51:43.047499 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:51:43.290367 sshd[6632]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:43.295678 systemd[1]: sshd@25-172.31.22.232:22-139.178.89.65:38656.service: Deactivated successfully.
Feb 13 19:51:43.304491 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:51:43.308914 systemd-logind[2018]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:51:43.311142 systemd-logind[2018]: Removed session 26.
Feb 13 19:51:56.595822 containerd[2037]: time="2025-02-13T19:51:56.595449759Z" level=info msg="shim disconnected" id=d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58 namespace=k8s.io
Feb 13 19:51:56.596976 containerd[2037]: time="2025-02-13T19:51:56.595730235Z" level=warning msg="cleaning up after shim disconnected" id=d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58 namespace=k8s.io
Feb 13 19:51:56.596976 containerd[2037]: time="2025-02-13T19:51:56.596612319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:56.603941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58-rootfs.mount: Deactivated successfully.
Feb 13 19:51:56.652719 containerd[2037]: time="2025-02-13T19:51:56.652644123Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:51:56.926776 containerd[2037]: time="2025-02-13T19:51:56.926244064Z" level=info msg="shim disconnected" id=4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b namespace=k8s.io
Feb 13 19:51:56.926776 containerd[2037]: time="2025-02-13T19:51:56.926434816Z" level=warning msg="cleaning up after shim disconnected" id=4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b namespace=k8s.io
Feb 13 19:51:56.926776 containerd[2037]: time="2025-02-13T19:51:56.926455756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:56.931072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b-rootfs.mount: Deactivated successfully.
Feb 13 19:51:57.145783 kubelet[3647]: I0213 19:51:57.145706    3647 scope.go:117] "RemoveContainer" containerID="d18ba869cf12fa4d0e11ecf1bd7d917f0a5bfaa013ab714cb6600c478a64ab58"
Feb 13 19:51:57.151116 containerd[2037]: time="2025-02-13T19:51:57.151060117Z" level=info msg="CreateContainer within sandbox \"0bca79a85286abf0724a1740e67c316562ea2ac89396304b3ffa3afbab82d662\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:51:57.152233 kubelet[3647]: I0213 19:51:57.152190    3647 scope.go:117] "RemoveContainer" containerID="4d50425a7b66bf972ff2ad7cf171e81b82d99e542b18284ef80b1af1bdd9c00b"
Feb 13 19:51:57.161713 containerd[2037]: time="2025-02-13T19:51:57.161624521Z" level=info msg="CreateContainer within sandbox \"0f9b5ecb82286145ccd4e68bce9b3ce983f54b35980242f0a916a6b201b316eb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 13 19:51:57.183432 containerd[2037]: time="2025-02-13T19:51:57.182532289Z" level=info msg="CreateContainer within sandbox \"0bca79a85286abf0724a1740e67c316562ea2ac89396304b3ffa3afbab82d662\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b411cca05492c9427972865b44d5c54e296e688646a84b892117bd03734560b2\""
Feb 13 19:51:57.189363 containerd[2037]: time="2025-02-13T19:51:57.187418665Z" level=info msg="StartContainer for \"b411cca05492c9427972865b44d5c54e296e688646a84b892117bd03734560b2\""
Feb 13 19:51:57.201393 containerd[2037]: time="2025-02-13T19:51:57.201280982Z" level=info msg="CreateContainer within sandbox \"0f9b5ecb82286145ccd4e68bce9b3ce983f54b35980242f0a916a6b201b316eb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8005b18fb5d28ffaee78baca44a318ddb5d7344a582a26dec2deba436d532dea\""
Feb 13 19:51:57.203535 containerd[2037]: time="2025-02-13T19:51:57.202645490Z" level=info msg="StartContainer for \"8005b18fb5d28ffaee78baca44a318ddb5d7344a582a26dec2deba436d532dea\""
Feb 13 19:51:57.338231 containerd[2037]: time="2025-02-13T19:51:57.338150990Z" level=info msg="StartContainer for \"8005b18fb5d28ffaee78baca44a318ddb5d7344a582a26dec2deba436d532dea\" returns successfully"
Feb 13 19:51:57.344968 containerd[2037]: time="2025-02-13T19:51:57.344896298Z" level=info msg="StartContainer for \"b411cca05492c9427972865b44d5c54e296e688646a84b892117bd03734560b2\" returns successfully"
Feb 13 19:52:01.617885 containerd[2037]: time="2025-02-13T19:52:01.617527351Z" level=info msg="shim disconnected" id=231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc namespace=k8s.io
Feb 13 19:52:01.617885 containerd[2037]: time="2025-02-13T19:52:01.617602675Z" level=warning msg="cleaning up after shim disconnected" id=231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc namespace=k8s.io
Feb 13 19:52:01.617885 containerd[2037]: time="2025-02-13T19:52:01.617640631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:01.624245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc-rootfs.mount: Deactivated successfully.
Feb 13 19:52:01.642599 containerd[2037]: time="2025-02-13T19:52:01.642533648Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:52:02.176804 kubelet[3647]: I0213 19:52:02.176753    3647 scope.go:117] "RemoveContainer" containerID="231fb1d66f014d92eb3768517317a085bb286ba300732ab6c0d9db7a4a3c51dc"
Feb 13 19:52:02.180864 containerd[2037]: time="2025-02-13T19:52:02.180747054Z" level=info msg="CreateContainer within sandbox \"71933d1aff7e62fe6899b5e934927354921c68863c5556c6f155e67ab1e4ead3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:52:02.204549 containerd[2037]: time="2025-02-13T19:52:02.204412806Z" level=info msg="CreateContainer within sandbox \"71933d1aff7e62fe6899b5e934927354921c68863c5556c6f155e67ab1e4ead3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"55b1838985fa6d2bef136201e589437a0580335425924c6c5886b8c5ed4e6978\""
Feb 13 19:52:02.206075 containerd[2037]: time="2025-02-13T19:52:02.205151214Z" level=info msg="StartContainer for \"55b1838985fa6d2bef136201e589437a0580335425924c6c5886b8c5ed4e6978\""
Feb 13 19:52:02.324939 containerd[2037]: time="2025-02-13T19:52:02.324732835Z" level=info msg="StartContainer for \"55b1838985fa6d2bef136201e589437a0580335425924c6c5886b8c5ed4e6978\" returns successfully"
Feb 13 19:52:03.653537 kubelet[3647]: E0213 19:52:03.653465    3647 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": context deadline exceeded"
Feb 13 19:52:13.654174 kubelet[3647]: E0213 19:52:13.654080    3647 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"