Feb 13 20:43:36.462980 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:43:36.463005 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:43:36.463013 kernel: KASLR enabled
Feb 13 20:43:36.463019 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 20:43:36.463026 kernel: printk: bootconsole [pl11] enabled
Feb 13 20:43:36.463032 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:43:36.463039 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 
Feb 13 20:43:36.463045 kernel: random: crng init done
Feb 13 20:43:36.463052 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:43:36.463057 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 20:43:36.463063 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463070 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463077 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01   00000001 INTL 20230628)
Feb 13 20:43:36.463083 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463102 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463109 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463115 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463124 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463131 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463137 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 20:43:36.463144 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 20:43:36.463152 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 20:43:36.463164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 20:43:36.463171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 20:43:36.463177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 20:43:36.463184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 20:43:36.463190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 20:43:36.463197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 20:43:36.463205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 20:43:36.463211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 20:43:36.463218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 20:43:36.463224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 20:43:36.463231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 20:43:36.463237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 20:43:36.463243 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Feb 13 20:43:36.463249 kernel: Zone ranges:
Feb 13 20:43:36.463256 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 20:43:36.463262 kernel:   DMA32    empty
Feb 13 20:43:36.463268 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:43:36.463275 kernel: Movable zone start for each node
Feb 13 20:43:36.463286 kernel: Early memory node ranges
Feb 13 20:43:36.463293 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 20:43:36.463300 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Feb 13 20:43:36.463306 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 20:43:36.463313 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 20:43:36.463321 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 20:43:36.463329 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 20:43:36.463335 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 20:43:36.463342 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 20:43:36.463349 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 20:43:36.463356 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:43:36.463362 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:43:36.463370 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:43:36.463376 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 20:43:36.463383 kernel: psci: SMC Calling Convention v1.4
Feb 13 20:43:36.463390 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 20:43:36.463397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 20:43:36.463405 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:43:36.463412 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:43:36.463418 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 20:43:36.463425 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:43:36.463432 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:43:36.463438 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:43:36.463445 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:43:36.463452 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:43:36.463459 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:43:36.463476 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:43:36.463497 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 20:43:36.463506 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:43:36.463513 kernel: alternatives: applying boot alternatives
Feb 13 20:43:36.463523 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:43:36.463531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:43:36.463539 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:43:36.463547 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:43:36.463554 kernel: Fallback order for Node 0: 0 
Feb 13 20:43:36.463560 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Feb 13 20:43:36.463567 kernel: Policy zone: Normal
Feb 13 20:43:36.463574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:43:36.463580 kernel: software IO TLB: area num 2.
Feb 13 20:43:36.463589 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Feb 13 20:43:36.463597 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Feb 13 20:43:36.463604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:43:36.463610 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:43:36.463618 kernel: rcu:         RCU event tracing is enabled.
Feb 13 20:43:36.463625 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:43:36.463632 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 20:43:36.463639 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 20:43:36.463645 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:43:36.463652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:43:36.463659 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:43:36.463667 kernel: GICv3: 960 SPIs implemented
Feb 13 20:43:36.463674 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:43:36.463681 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:43:36.463688 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:43:36.463694 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 20:43:36.463701 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 20:43:36.463709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:43:36.463721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:36.463729 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:43:36.463737 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:43:36.463744 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:43:36.463752 kernel: Console: colour dummy device 80x25
Feb 13 20:43:36.463759 kernel: printk: console [tty1] enabled
Feb 13 20:43:36.463766 kernel: ACPI: Core revision 20230628
Feb 13 20:43:36.463773 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:43:36.463781 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:43:36.463788 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:43:36.463795 kernel: landlock: Up and running.
Feb 13 20:43:36.463812 kernel: SELinux:  Initializing.
Feb 13 20:43:36.463820 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:36.463827 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:43:36.463836 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:36.463843 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:43:36.463851 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 20:43:36.463858 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 20:43:36.463865 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 20:43:36.463872 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:43:36.463879 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 20:43:36.463892 kernel: Remapping and enabling EFI services.
Feb 13 20:43:36.463900 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:43:36.463907 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:43:36.463915 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 20:43:36.463923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:43:36.463931 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:43:36.463939 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:43:36.463946 kernel: SMP: Total of 2 processors activated.
Feb 13 20:43:36.463954 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:43:36.463963 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 20:43:36.463970 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:43:36.463978 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:43:36.463986 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:43:36.463993 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:43:36.464000 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:43:36.464007 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:43:36.464015 kernel: alternatives: applying system-wide alternatives
Feb 13 20:43:36.464022 kernel: devtmpfs: initialized
Feb 13 20:43:36.464031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:43:36.464038 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:43:36.464046 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:43:36.464053 kernel: SMBIOS 3.1.0 present.
Feb 13 20:43:36.464061 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 20:43:36.464068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:43:36.464076 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:43:36.464083 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:43:36.464096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:43:36.464106 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:43:36.464113 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 20:43:36.464120 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:43:36.464128 kernel: cpuidle: using governor menu
Feb 13 20:43:36.464136 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:43:36.464143 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:43:36.464150 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:43:36.464158 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:43:36.464165 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:43:36.464174 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:43:36.464181 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:43:36.464189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:43:36.464196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:43:36.464204 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:43:36.464211 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:43:36.464218 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:43:36.464226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:43:36.464233 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:43:36.464242 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:43:36.464249 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:43:36.464257 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:43:36.464264 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:43:36.464272 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:43:36.464279 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:43:36.464286 kernel: ACPI: Interpreter enabled
Feb 13 20:43:36.464294 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:43:36.464301 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:43:36.464310 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:43:36.464317 kernel: printk: bootconsole [pl11] disabled
Feb 13 20:43:36.464325 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 20:43:36.464332 kernel: iommu: Default domain type: Translated
Feb 13 20:43:36.464339 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:43:36.464347 kernel: efivars: Registered efivars operations
Feb 13 20:43:36.464354 kernel: vgaarb: loaded
Feb 13 20:43:36.464361 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:43:36.464369 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:43:36.464378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:43:36.464385 kernel: pnp: PnP ACPI init
Feb 13 20:43:36.464392 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 20:43:36.464400 kernel: NET: Registered PF_INET protocol family
Feb 13 20:43:36.464407 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:43:36.464414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:43:36.464422 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:43:36.464429 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:43:36.464437 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:43:36.464446 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:43:36.464454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:36.464461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:43:36.464468 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:43:36.464476 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:43:36.464483 kernel: kvm [1]: HYP mode not available
Feb 13 20:43:36.464490 kernel: Initialise system trusted keyrings
Feb 13 20:43:36.464498 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:43:36.464505 kernel: Key type asymmetric registered
Feb 13 20:43:36.464514 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:43:36.464522 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:43:36.464529 kernel: io scheduler mq-deadline registered
Feb 13 20:43:36.464536 kernel: io scheduler kyber registered
Feb 13 20:43:36.464544 kernel: io scheduler bfq registered
Feb 13 20:43:36.464551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:43:36.464558 kernel: thunder_xcv, ver 1.0
Feb 13 20:43:36.464565 kernel: thunder_bgx, ver 1.0
Feb 13 20:43:36.464573 kernel: nicpf, ver 1.0
Feb 13 20:43:36.464580 kernel: nicvf, ver 1.0
Feb 13 20:43:36.464724 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:43:36.464823 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:43:35 UTC (1739479415)
Feb 13 20:43:36.464835 kernel: efifb: probing for efifb
Feb 13 20:43:36.464842 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 20:43:36.464850 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 20:43:36.464857 kernel: efifb: scrolling: redraw
Feb 13 20:43:36.464865 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 20:43:36.464875 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:43:36.464883 kernel: fb0: EFI VGA frame buffer device
Feb 13 20:43:36.464891 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 20:43:36.464898 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:43:36.464905 kernel: No ACPI PMU IRQ for CPU0
Feb 13 20:43:36.464912 kernel: No ACPI PMU IRQ for CPU1
Feb 13 20:43:36.464920 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 20:43:36.464928 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:43:36.464935 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:43:36.464944 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:43:36.464951 kernel: Segment Routing with IPv6
Feb 13 20:43:36.464959 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:43:36.464966 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:43:36.464973 kernel: Key type dns_resolver registered
Feb 13 20:43:36.464981 kernel: registered taskstats version 1
Feb 13 20:43:36.464988 kernel: Loading compiled-in X.509 certificates
Feb 13 20:43:36.464995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:43:36.465003 kernel: Key type .fscrypt registered
Feb 13 20:43:36.465011 kernel: Key type fscrypt-provisioning registered
Feb 13 20:43:36.465019 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:43:36.465026 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:43:36.465033 kernel: ima: No architecture policies found
Feb 13 20:43:36.465041 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:43:36.465048 kernel: clk: Disabling unused clocks
Feb 13 20:43:36.465055 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:43:36.465062 kernel: Run /init as init process
Feb 13 20:43:36.465070 kernel:   with arguments:
Feb 13 20:43:36.465079 kernel:     /init
Feb 13 20:43:36.465086 kernel:   with environment:
Feb 13 20:43:36.465101 kernel:     HOME=/
Feb 13 20:43:36.465109 kernel:     TERM=linux
Feb 13 20:43:36.465116 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:43:36.465126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:43:36.465135 systemd[1]: Detected virtualization microsoft.
Feb 13 20:43:36.465143 systemd[1]: Detected architecture arm64.
Feb 13 20:43:36.465153 systemd[1]: Running in initrd.
Feb 13 20:43:36.465161 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:43:36.465168 systemd[1]: Hostname set to <localhost>.
Feb 13 20:43:36.465176 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:36.465184 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:43:36.465192 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:36.465200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:36.465209 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:43:36.465219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:43:36.465227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:43:36.465235 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:43:36.465245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:43:36.465253 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:43:36.465261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:36.465270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:36.465278 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:43:36.465286 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:43:36.465294 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:43:36.465302 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:43:36.465310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:36.465318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:36.465326 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:43:36.465334 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:43:36.465343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:36.465352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:36.465360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:36.465368 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:43:36.465375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:43:36.465383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:43:36.465391 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:43:36.465399 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:43:36.465407 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:43:36.465416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:43:36.465443 systemd-journald[217]: Collecting audit messages is disabled.
Feb 13 20:43:36.465462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:36.465471 systemd-journald[217]: Journal started
Feb 13 20:43:36.465491 systemd-journald[217]: Runtime Journal (/run/log/journal/22c9ca3924ee4e66b53ac5efa36a940e) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:43:36.463780 systemd-modules-load[218]: Inserted module 'overlay'
Feb 13 20:43:36.497507 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:43:36.497563 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:43:36.517700 kernel: Bridge firewalling registered
Feb 13 20:43:36.517750 systemd-modules-load[218]: Inserted module 'br_netfilter'
Feb 13 20:43:36.521292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:36.535351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:36.549821 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:43:36.563389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:36.575220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:36.599420 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:36.608257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:43:36.638821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:43:36.677286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:43:36.687879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:36.712270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:36.720396 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:36.729193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:36.763420 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:43:36.779985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:36.797864 dracut-cmdline[250]: dracut-dracut-053
Feb 13 20:43:36.797864 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:43:36.792311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:43:36.869489 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:36.870120 systemd-resolved[260]: Positive Trust Anchors:
Feb 13 20:43:36.870131 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:43:36.870162 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:43:36.872478 systemd-resolved[260]: Defaulting to hostname 'linux'.
Feb 13 20:43:36.894823 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:36.904387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:37.027122 kernel: SCSI subsystem initialized
Feb 13 20:43:37.036132 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:43:37.047209 kernel: iscsi: registered transport (tcp)
Feb 13 20:43:37.066072 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:43:37.066106 kernel: QLogic iSCSI HBA Driver
Feb 13 20:43:37.106980 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:37.128265 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:43:37.161903 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:43:37.161950 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:43:37.169188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:43:37.221121 kernel: raid6: neonx8   gen() 15777 MB/s
Feb 13 20:43:37.241104 kernel: raid6: neonx4   gen() 15656 MB/s
Feb 13 20:43:37.261101 kernel: raid6: neonx2   gen() 13233 MB/s
Feb 13 20:43:37.282102 kernel: raid6: neonx1   gen() 10483 MB/s
Feb 13 20:43:37.302105 kernel: raid6: int64x8  gen()  6958 MB/s
Feb 13 20:43:37.322101 kernel: raid6: int64x4  gen()  7352 MB/s
Feb 13 20:43:37.343102 kernel: raid6: int64x2  gen()  6133 MB/s
Feb 13 20:43:37.367436 kernel: raid6: int64x1  gen()  5061 MB/s
Feb 13 20:43:37.367447 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Feb 13 20:43:37.392154 kernel: raid6: .... xor() 11932 MB/s, rmw enabled
Feb 13 20:43:37.392177 kernel: raid6: using neon recovery algorithm
Feb 13 20:43:37.405493 kernel: xor: measuring software checksum speed
Feb 13 20:43:37.405510 kernel:    8regs           : 19836 MB/sec
Feb 13 20:43:37.410505 kernel:    32regs          : 19622 MB/sec
Feb 13 20:43:37.414740 kernel:    arm64_neon      : 26927 MB/sec
Feb 13 20:43:37.419365 kernel: xor: using function: arm64_neon (26927 MB/sec)
Feb 13 20:43:37.470115 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:43:37.481704 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:37.500255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:37.524456 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Feb 13 20:43:37.529922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:37.550226 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:43:37.575327 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Feb 13 20:43:37.610771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:37.630322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:43:37.672342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:37.691285 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:43:37.716237 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:37.728313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:37.746300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:37.765593 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:43:37.797247 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:43:37.828763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:37.849464 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 20:43:37.828940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:37.876193 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 20:43:37.848430 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:37.933363 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 20:43:37.933388 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:43:37.933399 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Feb 13 20:43:37.933555 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 20:43:37.933566 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 20:43:37.857397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:37.973009 kernel: scsi host0: storvsc_host_t
Feb 13 20:43:37.973184 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 13 20:43:37.973198 kernel: scsi host1: storvsc_host_t
Feb 13 20:43:37.973291 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Feb 13 20:43:37.973312 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 20:43:37.857573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:37.992445 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Feb 13 20:43:37.992489 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 20:43:37.896770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:37.990913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:38.020599 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:38.058378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:38.086305 kernel: PTP clock support registered
Feb 13 20:43:38.086329 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: VF slot 1 added
Feb 13 20:43:38.058488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:38.107767 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 20:43:38.125685 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 20:43:38.125708 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 20:43:38.125720 kernel: hv_vmbus: registering driver hv_utils
Feb 13 20:43:38.125737 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 20:43:38.107683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:38.143652 kernel: hv_vmbus: registering driver hv_pci
Feb 13 20:43:38.143676 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 20:43:38.143691 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 20:43:38.149128 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 20:43:37.652574 systemd-resolved[260]: Clock change detected. Flushing caches.
Feb 13 20:43:37.680817 kernel: hv_pci ab336f7c-24bc-4014-987a-76eb1b260486: PCI VMBus probing: Using version 0x10004
Feb 13 20:43:37.880544 systemd-journald[217]: Time jumped backwards, rotating.
Feb 13 20:43:37.880608 kernel: hv_pci ab336f7c-24bc-4014-987a-76eb1b260486: PCI host bridge to bus 24bc:00
Feb 13 20:43:37.880713 kernel: pci_bus 24bc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 20:43:37.880811 kernel: pci_bus 24bc:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 20:43:37.880888 kernel: pci 24bc:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 20:43:37.880988 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 20:43:37.883921 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:43:37.884031 kernel: pci 24bc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:37.884123 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:43:37.884207 kernel: pci 24bc:00:02.0: enabling Extended Tags
Feb 13 20:43:37.884289 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 20:43:37.884407 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 20:43:37.884495 kernel: pci 24bc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 24bc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 20:43:37.884578 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:37.884588 kernel: pci_bus 24bc:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 20:43:37.884672 kernel: pci 24bc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 20:43:37.884754 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:43:37.658938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:37.694622 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:43:37.820363 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:37.950875 kernel: mlx5_core 24bc:00:02.0: enabling device (0000 -> 0002)
Feb 13 20:43:38.271406 kernel: mlx5_core 24bc:00:02.0: firmware version: 16.30.1284
Feb 13 20:43:38.271543 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (489)
Feb 13 20:43:38.271554 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (494)
Feb 13 20:43:38.271564 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:38.271573 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: VF registering: eth1
Feb 13 20:43:38.271667 kernel: mlx5_core 24bc:00:02.0 eth1: joined to eth0
Feb 13 20:43:38.271761 kernel: mlx5_core 24bc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 20:43:38.064894 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 20:43:38.111544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:43:38.130353 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 20:43:38.313725 kernel: mlx5_core 24bc:00:02.0 enP9404s1: renamed from eth1
Feb 13 20:43:38.143534 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 20:43:38.151884 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 20:43:38.165486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:43:39.203373 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:43:39.203424 disk-uuid[603]: The operation has completed successfully.
Feb 13 20:43:39.265792 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:43:39.265902 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:43:39.300485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:43:39.315648 sh[717]: Success
Feb 13 20:43:39.330391 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:43:39.406866 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:43:39.414492 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:43:39.429418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:43:39.470122 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:43:39.470202 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:39.478262 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:43:39.484921 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:43:39.489884 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:43:39.560048 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:43:39.566446 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:43:39.583595 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:43:39.610215 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:39.610284 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:39.604670 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:43:39.635351 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:39.635377 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:39.648645 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:43:39.654353 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:39.661603 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:43:39.678715 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:43:39.741935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:39.763633 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:43:39.789923 systemd-networkd[901]: lo: Link UP
Feb 13 20:43:39.789936 systemd-networkd[901]: lo: Gained carrier
Feb 13 20:43:39.791570 systemd-networkd[901]: Enumeration completed
Feb 13 20:43:39.793979 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:43:39.794841 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:39.794845 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:39.801903 systemd[1]: Reached target network.target - Network.
Feb 13 20:43:39.898682 kernel: mlx5_core 24bc:00:02.0 enP9404s1: Link up
Feb 13 20:43:39.942028 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: Data path switched to VF: enP9404s1
Feb 13 20:43:39.941729 systemd-networkd[901]: enP9404s1: Link UP
Feb 13 20:43:39.941804 systemd-networkd[901]: eth0: Link UP
Feb 13 20:43:39.941921 systemd-networkd[901]: eth0: Gained carrier
Feb 13 20:43:39.941929 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:39.967615 systemd-networkd[901]: enP9404s1: Gained carrier
Feb 13 20:43:39.980397 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:43:39.980961 ignition[837]: Ignition 2.19.0
Feb 13 20:43:39.993313 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:39.980967 ignition[837]: Stage: fetch-offline
Feb 13 20:43:39.981001 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:39.981009 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.010675 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:43:39.981115 ignition[837]: parsed url from cmdline: ""
Feb 13 20:43:39.981119 ignition[837]: no config URL provided
Feb 13 20:43:39.981123 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:39.981130 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:39.981135 ignition[837]: failed to fetch config: resource requires networking
Feb 13 20:43:39.989524 ignition[837]: Ignition finished successfully
Feb 13 20:43:40.031998 ignition[911]: Ignition 2.19.0
Feb 13 20:43:40.032089 ignition[911]: Stage: fetch
Feb 13 20:43:40.032361 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:40.032371 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.032504 ignition[911]: parsed url from cmdline: ""
Feb 13 20:43:40.032508 ignition[911]: no config URL provided
Feb 13 20:43:40.032513 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:43:40.032524 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:43:40.032554 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 20:43:40.124099 ignition[911]: GET result: OK
Feb 13 20:43:40.124200 ignition[911]: config has been read from IMDS userdata
Feb 13 20:43:40.124258 ignition[911]: parsing config with SHA512: a2fddc2306c822540e5eb199c4094a0b1a6c1c2c683989be3ccd73083cfda083f7badea6d005b91a3bbf8d591f065448d1bbdef24ae4b21fb5e551392d964f6f
Feb 13 20:43:40.128692 unknown[911]: fetched base config from "system"
Feb 13 20:43:40.129135 ignition[911]: fetch: fetch complete
Feb 13 20:43:40.128700 unknown[911]: fetched base config from "system"
Feb 13 20:43:40.129139 ignition[911]: fetch: fetch passed
Feb 13 20:43:40.128712 unknown[911]: fetched user config from "azure"
Feb 13 20:43:40.129180 ignition[911]: Ignition finished successfully
Feb 13 20:43:40.133488 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:40.150624 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:43:40.176059 ignition[918]: Ignition 2.19.0
Feb 13 20:43:40.185121 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:40.176066 ignition[918]: Stage: kargs
Feb 13 20:43:40.176355 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:40.176370 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.177415 ignition[918]: kargs: kargs passed
Feb 13 20:43:40.177466 ignition[918]: Ignition finished successfully
Feb 13 20:43:40.213687 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:43:40.233599 ignition[925]: Ignition 2.19.0
Feb 13 20:43:40.233607 ignition[925]: Stage: disks
Feb 13 20:43:40.236739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:43:40.233818 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:40.243892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:40.233827 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.253868 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:43:40.235564 ignition[925]: disks: disks passed
Feb 13 20:43:40.266122 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:43:40.235624 ignition[925]: Ignition finished successfully
Feb 13 20:43:40.277287 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:43:40.289879 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:43:40.318579 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:43:40.353684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:43:40.370153 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 20:43:40.377632 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:43:40.437348 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:43:40.437745 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:43:40.442738 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:43:40.474519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:40.483503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:43:40.501761 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:43:40.527513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944)
Feb 13 20:43:40.527539 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:40.519788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:43:40.572635 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:40.572661 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:40.519832 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:40.558025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:43:40.600382 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:40.601527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:43:40.610629 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:40.685325 coreos-metadata[946]: Feb 13 20:43:40.685 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:43:40.697337 coreos-metadata[946]: Feb 13 20:43:40.697 INFO Fetch successful
Feb 13 20:43:40.697337 coreos-metadata[946]: Feb 13 20:43:40.697 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:43:40.719359 coreos-metadata[946]: Feb 13 20:43:40.718 INFO Fetch successful
Feb 13 20:43:40.727434 coreos-metadata[946]: Feb 13 20:43:40.726 INFO wrote hostname ci-4081.3.1-a-d3f644b76a to /sysroot/etc/hostname
Feb 13 20:43:40.729218 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:40.789749 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:43:40.805111 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:43:40.820979 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:43:40.833170 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:43:41.100454 systemd-networkd[901]: eth0: Gained IPv6LL
Feb 13 20:43:41.104601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:41.124588 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:43:41.139639 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:41.162635 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:41.155629 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:43:41.186543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:41.203351 ignition[1062]: INFO     : Ignition 2.19.0
Feb 13 20:43:41.203351 ignition[1062]: INFO     : Stage: mount
Feb 13 20:43:41.203351 ignition[1062]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:41.203351 ignition[1062]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:41.203351 ignition[1062]: INFO     : mount: mount passed
Feb 13 20:43:41.243410 ignition[1062]: INFO     : Ignition finished successfully
Feb 13 20:43:41.209863 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:43:41.237562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:43:41.258573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:41.300218 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074)
Feb 13 20:43:41.300244 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:41.307591 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:41.313267 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:41.321362 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:41.323414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:41.352356 ignition[1091]: INFO     : Ignition 2.19.0
Feb 13 20:43:41.352356 ignition[1091]: INFO     : Stage: files
Feb 13 20:43:41.362527 ignition[1091]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:41.362527 ignition[1091]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:41.362527 ignition[1091]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 20:43:41.362527 ignition[1091]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 20:43:41.362527 ignition[1091]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:43:41.400861 unknown[1091]: wrote ssh authorized keys file for user: core
Feb 13 20:43:41.436208 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:41.436208 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:43:41.475153 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:43:41.596778 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 20:43:41.746499 systemd-networkd[901]: enP9404s1: Gained IPv6LL
Feb 13 20:43:42.076849 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:43:42.322600 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:42.322600 ignition[1091]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: files passed
Feb 13 20:43:42.344655 ignition[1091]: INFO     : Ignition finished successfully
Feb 13 20:43:42.342502 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:43:42.390957 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:43:42.401566 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:43:42.426383 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:43:42.499419 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.499419 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.426494 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:43:42.533673 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.448525 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:42.460466 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:43:42.492582 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:43:42.535722 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:43:42.535814 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:43:42.552836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:43:42.568006 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:43:42.581706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:43:42.604597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:43:42.646373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:42.662559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:43:42.680278 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:42.687398 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:42.701119 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:43:42.714368 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:43:42.714493 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:42.731452 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:43:42.737640 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:43:42.749986 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:43:42.761746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:42.772819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:42.785094 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:43:42.797178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:42.811796 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:43:42.824414 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:43:42.838786 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:43:42.850235 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:43:42.850371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:42.867369 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:42.874389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:42.887922 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:43:42.888008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:42.901877 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:43:42.901999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:42.922135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:43:42.922263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:42.930297 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:43:42.930406 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:43:42.942781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:43:42.942882 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Ignition 2.19.0
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Stage: umount
Feb 13 20:43:43.036095 ignition[1143]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:43.036095 ignition[1143]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:43.036095 ignition[1143]: INFO     : umount: umount passed
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Ignition finished successfully
Feb 13 20:43:42.972664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:43:42.996720 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:43.006347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:43:43.006517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:43.022571 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:43:43.022693 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:43.050060 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:43:43.050784 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:43:43.050893 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:43:43.059154 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:43:43.059419 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:43:43.079948 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:43:43.080013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:43.086478 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:43:43.086523 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:43.096899 systemd[1]: Stopped target network.target - Network.
Feb 13 20:43:43.106537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:43:43.106602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:43.119149 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:43:43.135625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:43:43.142834 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:43.151201 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:43:43.162815 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:43:43.174992 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:43:43.175120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:43.185365 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:43:43.185445 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:43.195967 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:43:43.196019 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:43:43.207550 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:43:43.207591 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:43.219931 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:43:43.231920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:43.244722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:43:43.244807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:43:43.257377 systemd-networkd[901]: eth0: DHCPv6 lease lost
Feb 13 20:43:43.528493 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: Data path switched from VF: enP9404s1
Feb 13 20:43:43.257857 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:43:43.257966 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:43.279040 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:43:43.281264 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:43:43.294213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:43:43.294277 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:43.329843 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:43:43.343607 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:43:43.343687 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:43.357623 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:43:43.357687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:43.370606 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:43:43.370659 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:43.382916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:43:43.382964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:43.396928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:43.416187 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:43:43.416292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:43.448785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:43:43.448926 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:43.466124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:43:43.466205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:43.479084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:43:43.479143 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:43.492786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:43:43.492848 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:43.523231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:43:43.523303 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:43.543066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:43.845078 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:43:43.543137 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:43.565400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:43:43.565457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:43.604594 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:43:43.623537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:43:43.623622 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:43.642569 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:43:43.642642 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:43.658909 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:43:43.658961 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:43.680984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:43.681051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:43.696219 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:43:43.696340 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:43:43.709394 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:43:43.709478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:43:43.727256 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:43:43.760591 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:43:43.782480 systemd[1]: Switching root.
Feb 13 20:43:43.988405 systemd-journald[217]: Journal stopped
Feb 13 20:43:40.129139 ignition[911]: fetch: fetch passed
Feb 13 20:43:40.128712 unknown[911]: fetched user config from "azure"
Feb 13 20:43:40.129180 ignition[911]: Ignition finished successfully
Feb 13 20:43:40.133488 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
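
In the fetch stage just completed, Ignition found no config on the kernel command line or under /usr/lib/ignition, so on the azure platform it pulled user data from the Instance Metadata Service and logged its SHA512. A minimal Python sketch of that request (Ignition itself is a Go binary; the URL is taken from the log and the Metadata: true header is required by Azure IMDS, the rest is illustrative):

    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()

    config = base64.b64decode(raw)             # IMDS serves userData base64-encoded
    print(hashlib.sha512(config).hexdigest())  # the digest Ignition logs
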
Feb 13 20:43:40.150624 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:43:40.176059 ignition[918]: Ignition 2.19.0
Feb 13 20:43:40.185121 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:40.176066 ignition[918]: Stage: kargs
Feb 13 20:43:40.176355 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:40.176370 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.177415 ignition[918]: kargs: kargs passed
Feb 13 20:43:40.177466 ignition[918]: Ignition finished successfully
Feb 13 20:43:40.213687 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:43:40.233599 ignition[925]: Ignition 2.19.0
Feb 13 20:43:40.233607 ignition[925]: Stage: disks
Feb 13 20:43:40.236739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:43:40.233818 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:40.243892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:40.233827 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:40.253868 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:43:40.235564 ignition[925]: disks: disks passed
Feb 13 20:43:40.266122 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:43:40.235624 ignition[925]: Ignition finished successfully
Feb 13 20:43:40.277287 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:43:40.289879 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:43:40.318579 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:43:40.353684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:43:40.370153 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 20:43:40.377632 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:43:40.437348 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:43:40.437745 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:43:40.442738 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:43:40.474519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:40.483503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:43:40.501761 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:43:40.527513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944)
Feb 13 20:43:40.527539 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:40.519788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:43:40.572635 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:40.572661 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:40.519832 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:40.558025 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:43:40.600382 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:40.601527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:43:40.610629 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:40.685325 coreos-metadata[946]: Feb 13 20:43:40.685 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:43:40.697337 coreos-metadata[946]: Feb 13 20:43:40.697 INFO Fetch successful
Feb 13 20:43:40.697337 coreos-metadata[946]: Feb 13 20:43:40.697 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:43:40.719359 coreos-metadata[946]: Feb 13 20:43:40.718 INFO Fetch successful
Feb 13 20:43:40.727434 coreos-metadata[946]: Feb 13 20:43:40.726 INFO wrote hostname ci-4081.3.1-a-d3f644b76a to /sysroot/etc/hostname
Feb 13 20:43:40.729218 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
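
coreos-metadata above first probes the Azure wireserver at 168.63.129.16 (the platform's fixed virtual IP, also the DHCP server seen earlier), then reads the VM name from IMDS and writes it to /sysroot/etc/hostname. A hedged Python sketch of those two fetches (the real implementation is the coreos-metadata/Afterburn binary):

    import urllib.request

    def fetch(url: str, imds: bool = False) -> bytes:
        headers = {"Metadata": "true"} if imds else {}  # IMDS requires this header
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()

    fetch("http://168.63.129.16/?comp=versions")  # wireserver goal-state versions
    name = fetch("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text", imds=True)

    with open("/sysroot/etc/hostname", "wb") as f:
        f.write(name + b"\n")
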
Feb 13 20:43:40.789749 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:43:40.805111 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:43:40.820979 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:43:40.833170 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:43:41.100454 systemd-networkd[901]: eth0: Gained IPv6LL
Feb 13 20:43:41.104601 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:41.124588 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:43:41.139639 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:41.162635 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:41.155629 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:43:41.186543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:41.203351 ignition[1062]: INFO     : Ignition 2.19.0
Feb 13 20:43:41.203351 ignition[1062]: INFO     : Stage: mount
Feb 13 20:43:41.203351 ignition[1062]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:41.203351 ignition[1062]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:41.203351 ignition[1062]: INFO     : mount: mount passed
Feb 13 20:43:41.243410 ignition[1062]: INFO     : Ignition finished successfully
Feb 13 20:43:41.209863 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:43:41.237562 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:43:41.258573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:43:41.300218 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074)
Feb 13 20:43:41.300244 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:43:41.307591 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:43:41.313267 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:43:41.321362 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:43:41.323414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:43:41.352356 ignition[1091]: INFO     : Ignition 2.19.0
Feb 13 20:43:41.352356 ignition[1091]: INFO     : Stage: files
Feb 13 20:43:41.362527 ignition[1091]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:41.362527 ignition[1091]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:41.362527 ignition[1091]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 20:43:41.362527 ignition[1091]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 20:43:41.362527 ignition[1091]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 20:43:41.401142 ignition[1091]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:43:41.400861 unknown[1091]: wrote ssh authorized keys file for user: core
Feb 13 20:43:41.436208 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:41.436208 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:43:41.475153 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:43:41.596778 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:41.608959 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 20:43:41.746499 systemd-networkd[901]: enP9404s1: Gained IPv6LL
Feb 13 20:43:42.076849 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:43:42.322600 ignition[1091]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:43:42.322600 ignition[1091]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:43:42.344655 ignition[1091]: INFO     : files: files passed
Feb 13 20:43:42.344655 ignition[1091]: INFO     : Ignition finished successfully
Feb 13 20:43:42.342502 systemd[1]: Finished ignition-files.service - Ignition (files).
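
The files stage is driven entirely by the fetched config: it provisions the core user and its SSH keys, downloads the Helm tarball and the kubernetes sysext image, writes the smaller files and the /etc/extensions/kubernetes.raw symlink, and enables prepare-helm.service. The config itself never appears in the log; a hypothetical Ignition v3 config producing a stage like this could be shaped as follows (Python dict for illustration; every value not seen in the log above, such as the update.conf contents, is an assumption):

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",       # contents hypothetical
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"},
            ],
        },
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }
    print(json.dumps(config, indent=2))
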
Feb 13 20:43:42.390957 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:43:42.401566 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:43:42.426383 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:43:42.499419 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.499419 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.426494 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:43:42.533673 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:43:42.448525 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:42.460466 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:43:42.492582 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:43:42.535722 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:43:42.535814 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:43:42.552836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:43:42.568006 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:43:42.581706 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:43:42.604597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:43:42.646373 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:42.662559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:43:42.680278 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:42.687398 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:42.701119 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:43:42.714368 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:43:42.714493 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:43:42.731452 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:43:42.737640 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:43:42.749986 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:43:42.761746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:43:42.772819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:43:42.785094 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:43:42.797178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:43:42.811796 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:43:42.824414 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:43:42.838786 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:43:42.850235 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:43:42.850371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:43:42.867369 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:42.874389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:42.887922 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:43:42.888008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:42.901877 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:43:42.901999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:43:42.922135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:43:42.922263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:43:42.930297 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:43:42.930406 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:43:42.942781 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:43:42.942882 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Ignition 2.19.0
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Stage: umount
Feb 13 20:43:43.036095 ignition[1143]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:43:43.036095 ignition[1143]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 20:43:43.036095 ignition[1143]: INFO     : umount: umount passed
Feb 13 20:43:43.036095 ignition[1143]: INFO     : Ignition finished successfully
Feb 13 20:43:42.972664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:43:42.996720 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:43:43.006347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:43:43.006517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:43.022571 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:43:43.022693 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:43:43.050060 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:43:43.050784 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:43:43.050893 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:43:43.059154 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:43:43.059419 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:43:43.079948 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:43:43.080013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:43:43.086478 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:43:43.086523 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:43:43.096899 systemd[1]: Stopped target network.target - Network.
Feb 13 20:43:43.106537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:43:43.106602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:43:43.119149 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:43:43.135625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:43:43.142834 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:43.151201 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:43:43.162815 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:43:43.174992 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:43:43.175120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:43:43.185365 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:43:43.185445 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:43:43.195967 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:43:43.196019 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:43:43.207550 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:43:43.207591 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:43:43.219931 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:43:43.231920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:43.244722 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:43:43.244807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:43:43.257377 systemd-networkd[901]: eth0: DHCPv6 lease lost
Feb 13 20:43:43.528493 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: Data path switched from VF: enP9404s1
Feb 13 20:43:43.257857 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:43:43.257966 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:43.279040 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:43:43.281264 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:43:43.294213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:43:43.294277 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:43.329843 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:43:43.343607 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:43:43.343687 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:43:43.357623 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:43:43.357687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:43.370606 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:43:43.370659 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:43.382916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:43:43.382964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:43.396928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:43.416187 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:43:43.416292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:43:43.448785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:43:43.448926 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:43.466124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:43:43.466205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:43.479084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:43:43.479143 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:43.492786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:43:43.492848 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:43:43.523231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:43:43.523303 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:43:43.543066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:43:43.845078 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:43:43.543137 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:43:43.565400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:43:43.565457 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:43:43.604594 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:43:43.623537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:43:43.623622 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:43.642569 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:43:43.642642 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:43.658909 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:43:43.658961 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:43.680984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:43:43.681051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:43.696219 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:43:43.696340 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:43:43.709394 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:43:43.709478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:43:43.727256 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:43:43.760591 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:43:43.782480 systemd[1]: Switching root.
Feb 13 20:43:43.988405 systemd-journald[217]: Journal stopped
Feb 13 20:43:46.186740 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 20:43:46.186763 kernel: SELinux:  policy capability open_perms=1
Feb 13 20:43:46.186773 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 20:43:46.186782 kernel: SELinux:  policy capability always_check_network=0
Feb 13 20:43:46.186791 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 20:43:46.186799 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 20:43:46.186808 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 20:43:46.186816 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 20:43:46.186824 kernel: audit: type=1403 audit(1739479424.382:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:43:46.186833 systemd[1]: Successfully loaded SELinux policy in 85.518ms.
Feb 13 20:43:46.186844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.462ms.
Feb 13 20:43:46.186854 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:43:46.186863 systemd[1]: Detected virtualization microsoft.
Feb 13 20:43:46.186871 systemd[1]: Detected architecture arm64.
Feb 13 20:43:46.186881 systemd[1]: Detected first boot.
Feb 13 20:43:46.186892 systemd[1]: Hostname set to <ci-4081.3.1-a-d3f644b76a>.
Feb 13 20:43:46.186901 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:43:46.186910 zram_generator::config[1187]: No configuration found.
Feb 13 20:43:46.186920 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:43:46.186929 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:43:46.186938 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:43:46.186947 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:43:46.186958 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:43:46.186967 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:43:46.186979 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:43:46.186988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:43:46.186997 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:43:46.187007 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:43:46.187016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:43:46.187027 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:43:46.187036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:43:46.187046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:43:46.187055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:43:46.187064 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:43:46.187073 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:43:46.187082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:43:46.187091 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 20:43:46.187102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:43:46.187111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:43:46.187120 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:43:46.187132 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:43:46.187141 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:43:46.187151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:43:46.187161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:43:46.187170 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:43:46.187181 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:43:46.187191 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:43:46.187200 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:43:46.187210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:43:46.187219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:43:46.187229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:43:46.187240 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:43:46.187249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:43:46.187259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:43:46.187269 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:43:46.187278 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:43:46.187288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:43:46.187297 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:43:46.187308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:43:46.187318 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:43:46.187336 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:43:46.187346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:43:46.187356 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:43:46.187366 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:43:46.187375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:43:46.187385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:43:46.187397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:43:46.187407 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:43:46.187416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:43:46.187426 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:43:46.187436 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:43:46.187445 kernel: fuse: init (API version 7.39)
Feb 13 20:43:46.187454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:43:46.187463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:43:46.187475 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:43:46.187484 kernel: loop: module loaded
Feb 13 20:43:46.187492 kernel: ACPI: bus type drm_connector registered
Feb 13 20:43:46.187501 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:43:46.187511 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:43:46.187520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:43:46.187544 systemd-journald[1290]: Collecting audit messages is disabled.
Feb 13 20:43:46.187566 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:43:46.187577 systemd-journald[1290]: Journal started
Feb 13 20:43:46.187596 systemd-journald[1290]: Runtime Journal (/run/log/journal/4a3fcb0abdc14d23bf4791f97d5aba93) is 8.0M, max 78.5M, 70.5M free.
Feb 13 20:43:45.247103 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:43:45.295714 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:43:45.296089 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:43:45.296418 systemd[1]: systemd-journald.service: Consumed 3.672s CPU time.
Feb 13 20:43:46.236301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:43:46.236392 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:43:46.248346 systemd[1]: Stopped verity-setup.service.
Feb 13 20:43:46.267873 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:43:46.268772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:43:46.276055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:43:46.284103 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:43:46.291244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:43:46.299070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:43:46.307009 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:43:46.313704 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:43:46.323141 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:43:46.331985 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:43:46.332123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:43:46.340227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:43:46.340376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:43:46.348616 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:43:46.348750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:43:46.356457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:43:46.356593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:43:46.365463 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:43:46.365594 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:43:46.373172 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:43:46.373307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:43:46.380651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:43:46.387914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:43:46.397945 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:43:46.407386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:43:46.423921 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:43:46.436448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:43:46.444527 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:43:46.451276 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:43:46.451321 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:43:46.458759 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:43:46.467657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:43:46.476004 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:43:46.483034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:43:46.487668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:43:46.496544 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:43:46.505388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:43:46.506566 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:43:46.515827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:43:46.518555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:43:46.531188 systemd-journald[1290]: Time spent on flushing to /var/log/journal/4a3fcb0abdc14d23bf4791f97d5aba93 is 104.303ms for 895 entries.
Feb 13 20:43:46.531188 systemd-journald[1290]: System Journal (/var/log/journal/4a3fcb0abdc14d23bf4791f97d5aba93) is 11.8M, max 2.6G, 2.6G free.
Feb 13 20:43:46.708275 systemd-journald[1290]: Received client request to flush runtime journal.
Feb 13 20:43:46.708310 kernel: loop0: detected capacity change from 0 to 189592
Feb 13 20:43:46.708349 systemd-journald[1290]: /var/log/journal/4a3fcb0abdc14d23bf4791f97d5aba93/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Feb 13 20:43:46.708371 systemd-journald[1290]: Rotating system journal.
Feb 13 20:43:46.708388 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:43:46.708400 kernel: loop1: detected capacity change from 0 to 31320
Feb 13 20:43:46.542701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:43:46.552717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:43:46.579445 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:43:46.608304 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:43:46.635041 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:43:46.647615 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:43:46.663535 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:43:46.673677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:43:46.685726 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Feb 13 20:43:46.685737 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Feb 13 20:43:46.698243 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:43:46.707959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:43:46.720607 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:43:46.741770 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:43:46.750735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:43:46.765002 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 20:43:46.793267 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:43:46.793922 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:43:46.826758 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:43:46.844400 kernel: loop2: detected capacity change from 0 to 114432
Feb 13 20:43:46.845696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:43:46.870786 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Feb 13 20:43:46.870811 systemd-tmpfiles[1346]: ACLs are not supported, ignoring.
Feb 13 20:43:46.875505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:43:46.951357 kernel: loop3: detected capacity change from 0 to 114328
Feb 13 20:43:47.035356 kernel: loop4: detected capacity change from 0 to 189592
Feb 13 20:43:47.047649 kernel: loop5: detected capacity change from 0 to 31320
Feb 13 20:43:47.058937 kernel: loop6: detected capacity change from 0 to 114432
Feb 13 20:43:47.069389 kernel: loop7: detected capacity change from 0 to 114328
Feb 13 20:43:47.072215 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Feb 13 20:43:47.072717 (sd-merge)[1351]: Merged extensions into '/usr'.
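
systemd-sysext is merging the four extension images (including the kubernetes.raw link written by Ignition) into an overlay on top of /usr. Before merging, each image must ship an extension-release file compatible with the host's os-release; roughly, in simplified Python (the real check lives inside systemd and covers more cases):

    def parse_release(text: str) -> dict:
        fields = {}
        for line in text.splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                fields[key] = value.strip('"')
        return fields

    def compatible(host_os_release: str, extension_release: str) -> bool:
        host = parse_release(host_os_release)
        ext = parse_release(extension_release)
        if ext.get("ID") == "_any":           # extension opts out of matching
            return True
        if ext.get("ID") != host.get("ID"):
            return False
        if "SYSEXT_LEVEL" in ext:             # prefer the sysext level if pinned
            return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
        if "VERSION_ID" in ext:
            return ext["VERSION_ID"] == host.get("VERSION_ID")
        return True
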
Feb 13 20:43:47.081180 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:43:47.081416 systemd[1]: Reloading...
Feb 13 20:43:47.169364 zram_generator::config[1374]: No configuration found.
Feb 13 20:43:47.323812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:43:47.396092 systemd[1]: Reloading finished in 314 ms.
Feb 13 20:43:47.427052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:43:47.437374 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:43:47.451554 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:43:47.457086 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:43:47.479577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:43:47.490304 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:43:47.491048 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:43:47.491881 systemd-tmpfiles[1434]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:43:47.492115 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Feb 13 20:43:47.492158 systemd-tmpfiles[1434]: ACLs are not supported, ignoring.
Feb 13 20:43:47.494619 systemd[1]: Reloading requested from client PID 1433 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:43:47.494637 systemd[1]: Reloading...
Feb 13 20:43:47.512074 systemd-udevd[1435]: Using default interface naming scheme 'v255'.
Feb 13 20:43:47.553419 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:43:47.553432 systemd-tmpfiles[1434]: Skipping /boot
Feb 13 20:43:47.579917 systemd-tmpfiles[1434]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:43:47.579937 systemd-tmpfiles[1434]: Skipping /boot
Feb 13 20:43:47.601774 zram_generator::config[1470]: No configuration found.
Feb 13 20:43:47.786956 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:43:47.824480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:43:47.880807 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 20:43:47.881258 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 20:43:47.881306 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 20:43:47.900541 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 20:43:47.900654 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 20:43:47.900677 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 13 20:43:47.919036 kernel: Console: switching to colour dummy device 80x25
Feb 13 20:43:47.924372 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:43:47.963906 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 20:43:47.964205 systemd[1]: Reloading finished in 469 ms.
Feb 13 20:43:47.984399 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1461)
Feb 13 20:43:47.994248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:43:48.019956 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:43:48.080896 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:43:48.094246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 20:43:48.103279 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:43:48.117736 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:43:48.119028 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:43:48.126109 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:43:48.137433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:43:48.141659 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:43:48.158894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:43:48.169841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:43:48.179563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:43:48.192412 lvm[1600]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:43:48.194117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:43:48.205845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:43:48.215233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:43:48.229952 augenrules[1619]: No rules
Feb 13 20:43:48.233558 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:43:48.247021 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:43:48.267551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:43:48.273887 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:43:48.281592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:43:48.292038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:43:48.301498 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:43:48.309125 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:43:48.316372 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:43:48.325131 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:43:48.333552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:43:48.333694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:43:48.341146 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:43:48.341280 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:43:48.348471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:43:48.348613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:43:48.356210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:43:48.356375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:43:48.363657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:43:48.372858 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:43:48.390417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:43:48.404597 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:43:48.409192 lvm[1642]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:43:48.415340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:43:48.415506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:43:48.418684 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:43:48.429661 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:43:48.441635 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:43:48.444384 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:43:48.453367 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:43:48.463528 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:43:48.485050 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:43:48.553603 systemd-resolved[1627]: Positive Trust Anchors:
Feb 13 20:43:48.553621 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:43:48.553655 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
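
The positive trust anchor is the standard DNSSEC root key (KSK-2017) in DS form: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). Per RFC 4034, that digest is simply SHA-256 over the owner name in wire format concatenated with the DNSKEY RDATA, as this illustrative Python shows:

    import hashlib

    def ds_digest(owner_wire: bytes, dnskey_rdata: bytes) -> str:
        # DS digest type 2: SHA-256(canonical owner name || DNSKEY RDATA)
        return hashlib.sha256(owner_wire + dnskey_rdata).hexdigest()

    ROOT = b"\x00"  # wire encoding of the root name "."
    # ds_digest(ROOT, <root DNSKEY RDATA>) reproduces the e06d44b8... value above
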
Feb 13 20:43:48.553738 systemd-networkd[1626]: lo: Link UP
Feb 13 20:43:48.553742 systemd-networkd[1626]: lo: Gained carrier
Feb 13 20:43:48.555732 systemd-networkd[1626]: Enumeration completed
Feb 13 20:43:48.555856 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:43:48.557751 systemd-resolved[1627]: Using system hostname 'ci-4081.3.1-a-d3f644b76a'.
Feb 13 20:43:48.563394 systemd-networkd[1626]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:48.563403 systemd-networkd[1626]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:48.564244 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:43:48.571594 systemd[1]: Reached target network.target - Network.
Feb 13 20:43:48.577589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:43:48.592633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:43:48.600507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:43:48.609203 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:43:48.617006 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:43:48.627148 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:43:48.635361 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:43:48.642937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:43:48.651676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:43:48.661634 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:43:48.661673 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:43:48.668449 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:43:48.675958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:43:48.685264 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:43:48.697053 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:43:48.704800 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:43:48.712170 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:43:48.719310 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:43:48.725434 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:43:48.725464 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:43:48.730372 kernel: mlx5_core 24bc:00:02.0 enP9404s1: Link up
Feb 13 20:43:48.740459 systemd[1]: Starting chronyd.service - NTP client/server...
Feb 13 20:43:48.750526 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:43:48.760365 kernel: hv_netvsc 000d3ac4-5f98-000d-3ac4-5f98000d3ac4 eth0: Data path switched to VF: enP9404s1
Feb 13 20:43:48.776812 systemd-networkd[1626]: enP9404s1: Link UP
Feb 13 20:43:48.777034 systemd-networkd[1626]: eth0: Link UP
Feb 13 20:43:48.777394 systemd-networkd[1626]: eth0: Gained carrier
Feb 13 20:43:48.777486 systemd-networkd[1626]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:48.781725 systemd-networkd[1626]: enP9404s1: Gained carrier
Feb 13 20:43:48.784991 (chronyd)[1661]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Feb 13 20:43:48.788692 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 20:43:48.791430 systemd-networkd[1626]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:43:48.799089 chronyd[1665]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Feb 13 20:43:48.801491 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:43:48.811101 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:43:48.819253 chronyd[1665]: Timezone right/UTC failed leap second check, ignoring
Feb 13 20:43:48.819543 chronyd[1665]: Loaded seccomp filter (level 2)
Feb 13 20:43:48.824747 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:43:48.833936 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:43:48.834126 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Feb 13 20:43:48.834432 jq[1669]: false
Feb 13 20:43:48.837585 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Feb 13 20:43:48.850700 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Feb 13 20:43:48.850972 KVP[1671]: KVP starting; pid is:1671
Feb 13 20:43:48.858548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:43:48.859209 extend-filesystems[1670]: Found loop4
Feb 13 20:43:48.878499 kernel: hv_utils: KVP IC version 4.0
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found loop5
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found loop6
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found loop7
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda1
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda2
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda3
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found usr
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda4
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda6
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda7
Feb 13 20:43:48.878531 extend-filesystems[1670]: Found sda9
Feb 13 20:43:48.878531 extend-filesystems[1670]: Checking size of /dev/sda9
Feb 13 20:43:49.094526 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1483)
Feb 13 20:43:48.876374 KVP[1671]: KVP LIC Version: 3.1
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.929 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.941 INFO Fetch successful
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.941 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.946 INFO Fetch successful
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.947 INFO Fetching http://168.63.129.16/machine/abb07264-f946-415c-8432-335913d05697/1e5ba5bd%2D1b19%2D40b0%2Db57c%2Dc8c4a8ef7b62.%5Fci%2D4081.3.1%2Da%2Dd3f644b76a?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.962 INFO Fetch successful
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.963 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 13 20:43:49.094634 coreos-metadata[1663]: Feb 13 20:43:48.979 INFO Fetch successful
Feb 13 20:43:49.094960 extend-filesystems[1670]: Old size kept for /dev/sda9
Feb 13 20:43:49.094960 extend-filesystems[1670]: Found sr0
Feb 13 20:43:48.888679 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 20:43:48.878316 dbus-daemon[1666]: [system] SELinux support is enabled
Feb 13 20:43:48.929543 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:43:48.947534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:43:48.970579 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 20:43:48.980119 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 20:43:49.141399 update_engine[1703]: I20250213 20:43:49.062040  1703 main.cc:92] Flatcar Update Engine starting
Feb 13 20:43:49.141399 update_engine[1703]: I20250213 20:43:49.068490  1703 update_check_scheduler.cc:74] Next update check in 10m7s
Feb 13 20:43:48.980681 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 20:43:49.141692 jq[1714]: true
Feb 13 20:43:48.987490 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 20:43:49.008500 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 20:43:49.029189 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 20:43:49.060465 systemd[1]: Started chronyd.service - NTP client/server.
Feb 13 20:43:49.100666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 20:43:49.100835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 20:43:49.101100 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 20:43:49.101234 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 20:43:49.117604 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 20:43:49.117796 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 20:43:49.132778 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 20:43:49.132954 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 20:43:49.145443 systemd-logind[1698]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 13 20:43:49.146075 systemd-logind[1698]: New seat seat0.
Feb 13 20:43:49.148417 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 20:43:49.181152 jq[1725]: true
Feb 13 20:43:49.182706 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 20:43:49.198934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 20:43:49.214689 dbus-daemon[1666]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 20:43:49.219968 tar[1724]: linux-arm64/helm
Feb 13 20:43:49.234173 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 20:43:49.249944 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 20:43:49.250169 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 20:43:49.250298 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 20:43:49.258462 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 20:43:49.258581 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 20:43:49.274704 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 20:43:49.304528 bash[1756]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 20:43:49.306487 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 20:43:49.318303 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 20:43:49.390922 locksmithd[1757]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 20:43:49.544584 containerd[1726]: time="2025-02-13T20:43:49.544477800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 20:43:49.552236 sshd_keygen[1706]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 20:43:49.595705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 20:43:49.602846 containerd[1726]: time="2025-02-13T20:43:49.602594520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604290 containerd[1726]: time="2025-02-13T20:43:49.604087840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604290 containerd[1726]: time="2025-02-13T20:43:49.604145320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 20:43:49.604290 containerd[1726]: time="2025-02-13T20:43:49.604173400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 20:43:49.604490 containerd[1726]: time="2025-02-13T20:43:49.604429920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 20:43:49.604490 containerd[1726]: time="2025-02-13T20:43:49.604452800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604690 containerd[1726]: time="2025-02-13T20:43:49.604529840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604690 containerd[1726]: time="2025-02-13T20:43:49.604552520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604862 containerd[1726]: time="2025-02-13T20:43:49.604739360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604862 containerd[1726]: time="2025-02-13T20:43:49.604764960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604862 containerd[1726]: time="2025-02-13T20:43:49.604778560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604862 containerd[1726]: time="2025-02-13T20:43:49.604788560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.604947 containerd[1726]: time="2025-02-13T20:43:49.604867960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.605156 containerd[1726]: time="2025-02-13T20:43:49.605052240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 20:43:49.605188 containerd[1726]: time="2025-02-13T20:43:49.605174160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 20:43:49.605215 containerd[1726]: time="2025-02-13T20:43:49.605190120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 20:43:49.605401 containerd[1726]: time="2025-02-13T20:43:49.605265080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 20:43:49.605401 containerd[1726]: time="2025-02-13T20:43:49.605314640Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 20:43:49.614590 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 20:43:49.627684 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 20:43:49.627914 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 20:43:49.642788 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652291120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652407520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652506680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652526600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652541320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.652716200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653011040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653128200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653145440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653159120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653178400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653191040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653203680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654001 containerd[1726]: time="2025-02-13T20:43:49.653217520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653232760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653245800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653269440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653282440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653302800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653316560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653360400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653377880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653392440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653406040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653418440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653431440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653444680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654325 containerd[1726]: time="2025-02-13T20:43:49.653459200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653476240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653493480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653506160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653522000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653542840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653554680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653566200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653613880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653631960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653644120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653656240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653665640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653678600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 20:43:49.654584 containerd[1726]: time="2025-02-13T20:43:49.653847000Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 20:43:49.656715 containerd[1726]: time="2025-02-13T20:43:49.653888080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 20:43:49.656989 containerd[1726]: time="2025-02-13T20:43:49.656272480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 20:43:49.656989 containerd[1726]: time="2025-02-13T20:43:49.656394880Z" level=info msg="Connect containerd service"
Feb 13 20:43:49.656989 containerd[1726]: time="2025-02-13T20:43:49.656438920Z" level=info msg="using legacy CRI server"
Feb 13 20:43:49.656989 containerd[1726]: time="2025-02-13T20:43:49.656449200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 20:43:49.656989 containerd[1726]: time="2025-02-13T20:43:49.656598800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 20:43:49.658065 containerd[1726]: time="2025-02-13T20:43:49.657258680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:43:49.659754 containerd[1726]: time="2025-02-13T20:43:49.659613400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 20:43:49.659754 containerd[1726]: time="2025-02-13T20:43:49.659665040Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 20:43:49.659754 containerd[1726]: time="2025-02-13T20:43:49.659716240Z" level=info msg="Start subscribing containerd event"
Feb 13 20:43:49.659754 containerd[1726]: time="2025-02-13T20:43:49.659752200Z" level=info msg="Start recovering state"
Feb 13 20:43:49.665521 containerd[1726]: time="2025-02-13T20:43:49.659820000Z" level=info msg="Start event monitor"
Feb 13 20:43:49.665521 containerd[1726]: time="2025-02-13T20:43:49.659831400Z" level=info msg="Start snapshots syncer"
Feb 13 20:43:49.665521 containerd[1726]: time="2025-02-13T20:43:49.659840680Z" level=info msg="Start cni network conf syncer for default"
Feb 13 20:43:49.665521 containerd[1726]: time="2025-02-13T20:43:49.659850520Z" level=info msg="Start streaming server"
Feb 13 20:43:49.660009 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 20:43:49.667962 containerd[1726]: time="2025-02-13T20:43:49.667916360Z" level=info msg="containerd successfully booted in 0.127437s"
Feb 13 20:43:49.675806 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 20:43:49.693838 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 20:43:49.706801 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 20:43:49.715852 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 20:43:49.741325 tar[1724]: linux-arm64/LICENSE
Feb 13 20:43:49.741582 tar[1724]: linux-arm64/README.md
Feb 13 20:43:49.752471 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 20:43:50.380483 systemd-networkd[1626]: enP9404s1: Gained IPv6LL
Feb 13 20:43:50.828511 systemd-networkd[1626]: eth0: Gained IPv6LL
Feb 13 20:43:50.831439 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 20:43:50.839147 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 20:43:50.852527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:43:50.860670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 20:43:50.868138 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Feb 13 20:43:50.908963 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Feb 13 20:43:50.917435 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 20:43:51.378545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:43:51.379268 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:43:51.385986 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 20:43:51.397419 systemd[1]: Startup finished in 751ms (kernel) + 9.005s (initrd) + 7.098s (userspace) = 16.856s.
Feb 13 20:43:51.501958 login[1788]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:43:51.512939 login[1789]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:43:51.525557 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:43:51.530520 systemd-logind[1698]: New session 1 of user core.
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.531180Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.539561Z INFO Daemon Daemon OS: flatcar 4081.3.1
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.544974Z INFO Daemon Daemon Python: 3.11.9
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.550685Z INFO Daemon Daemon Run daemon
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.555721Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.1'
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.567418Z INFO Daemon Daemon Using waagent for provisioning
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.574128Z INFO Daemon Daemon Activate resource disk
Feb 13 20:43:51.591985 waagent[1806]: 2025-02-13T20:43:51.580210Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 13 20:43:51.591117 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:43:51.599767 waagent[1806]: 2025-02-13T20:43:51.598109Z INFO Daemon Daemon Found device: None
Feb 13 20:43:51.606350 waagent[1806]: 2025-02-13T20:43:51.605870Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 13 20:43:51.606014 systemd-logind[1698]: New session 2 of user core.
Feb 13 20:43:51.619013 waagent[1806]: 2025-02-13T20:43:51.618639Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 13 20:43:51.620056 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:43:51.634996 waagent[1806]: 2025-02-13T20:43:51.634298Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 13 20:43:51.636687 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:43:51.643970 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:43:51.646799 waagent[1806]: 2025-02-13T20:43:51.645941Z INFO Daemon Daemon Running default provisioning handler
Feb 13 20:43:51.666627 waagent[1806]: 2025-02-13T20:43:51.665950Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Feb 13 20:43:51.691361 waagent[1806]: 2025-02-13T20:43:51.690464Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 13 20:43:51.702074 waagent[1806]: 2025-02-13T20:43:51.701907Z INFO Daemon Daemon cloud-init is enabled: False
Feb 13 20:43:51.709137 waagent[1806]: 2025-02-13T20:43:51.708620Z INFO Daemon Daemon Copying ovf-env.xml
Feb 13 20:43:51.754423 waagent[1806]: 2025-02-13T20:43:51.754295Z INFO Daemon Daemon Successfully mounted dvd
Feb 13 20:43:51.795259 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 13 20:43:51.802365 waagent[1806]: 2025-02-13T20:43:51.802164Z INFO Daemon Daemon Detect protocol endpoint
Feb 13 20:43:51.821147 waagent[1806]: 2025-02-13T20:43:51.820661Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 13 20:43:51.827983 waagent[1806]: 2025-02-13T20:43:51.827899Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 13 20:43:51.835047 waagent[1806]: 2025-02-13T20:43:51.834969Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 13 20:43:51.841552 waagent[1806]: 2025-02-13T20:43:51.841477Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 13 20:43:51.846445 systemd[1829]: Queued start job for default target default.target.
Feb 13 20:43:51.847860 waagent[1806]: 2025-02-13T20:43:51.847739Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 13 20:43:51.859461 systemd[1829]: Created slice app.slice - User Application Slice.
Feb 13 20:43:51.859491 systemd[1829]: Reached target paths.target - Paths.
Feb 13 20:43:51.859504 systemd[1829]: Reached target timers.target - Timers.
Feb 13 20:43:51.861272 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:43:51.874591 waagent[1806]: 2025-02-13T20:43:51.874238Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 13 20:43:51.889080 waagent[1806]: 2025-02-13T20:43:51.888959Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 13 20:43:51.898040 waagent[1806]: 2025-02-13T20:43:51.897788Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 13 20:43:51.906416 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:43:51.906535 systemd[1829]: Reached target sockets.target - Sockets.
Feb 13 20:43:51.906549 systemd[1829]: Reached target basic.target - Basic System.
Feb 13 20:43:51.906593 systemd[1829]: Reached target default.target - Main User Target.
Feb 13 20:43:51.906619 systemd[1829]: Startup finished in 255ms.
Feb 13 20:43:51.907048 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:43:51.918562 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:43:51.919430 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:43:52.009364 kubelet[1816]: E0213 20:43:52.008583    1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:43:52.012138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:43:52.012292 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:43:52.054345 waagent[1806]: 2025-02-13T20:43:52.054236Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 13 20:43:52.064360 waagent[1806]: 2025-02-13T20:43:52.062518Z INFO Daemon Daemon Forcing an update of the goal state.
Feb 13 20:43:52.073337 waagent[1806]: 2025-02-13T20:43:52.073271Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 13 20:43:52.093783 waagent[1806]: 2025-02-13T20:43:52.093728Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Feb 13 20:43:52.100258 waagent[1806]: 2025-02-13T20:43:52.100203Z INFO Daemon
Feb 13 20:43:52.103245 waagent[1806]: 2025-02-13T20:43:52.103192Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2f4c41af-3a36-40c0-b99a-a32c7e8f06c4 eTag: 9099091925299873109 source: Fabric]
Feb 13 20:43:52.115317 waagent[1806]: 2025-02-13T20:43:52.115261Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Feb 13 20:43:52.123080 waagent[1806]: 2025-02-13T20:43:52.123023Z INFO Daemon
Feb 13 20:43:52.126128 waagent[1806]: 2025-02-13T20:43:52.126073Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Feb 13 20:43:52.140152 waagent[1806]: 2025-02-13T20:43:52.140075Z INFO Daemon Daemon Downloading artifacts profile blob
Feb 13 20:43:52.229382 waagent[1806]: 2025-02-13T20:43:52.228857Z INFO Daemon Downloaded certificate {'thumbprint': 'E2EA3EB2308F2BAAB33243FF35632FAA857A8630', 'hasPrivateKey': False}
Feb 13 20:43:52.240777 waagent[1806]: 2025-02-13T20:43:52.240721Z INFO Daemon Downloaded certificate {'thumbprint': '6C025B1A407FB29AC5BC041843B8AE3B87F271ED', 'hasPrivateKey': True}
Feb 13 20:43:52.251664 waagent[1806]: 2025-02-13T20:43:52.251608Z INFO Daemon Fetch goal state completed
Feb 13 20:43:52.263892 waagent[1806]: 2025-02-13T20:43:52.263823Z INFO Daemon Daemon Starting provisioning
Feb 13 20:43:52.269452 waagent[1806]: 2025-02-13T20:43:52.269382Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 13 20:43:52.274895 waagent[1806]: 2025-02-13T20:43:52.274838Z INFO Daemon Daemon Set hostname [ci-4081.3.1-a-d3f644b76a]
Feb 13 20:43:52.286505 waagent[1806]: 2025-02-13T20:43:52.286433Z INFO Daemon Daemon Publish hostname [ci-4081.3.1-a-d3f644b76a]
Feb 13 20:43:52.293578 waagent[1806]: 2025-02-13T20:43:52.293506Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 13 20:43:52.300772 waagent[1806]: 2025-02-13T20:43:52.300709Z INFO Daemon Daemon Primary interface is [eth0]
Feb 13 20:43:52.322451 systemd-networkd[1626]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:43:52.322459 systemd-networkd[1626]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:43:52.322480 systemd-networkd[1626]: eth0: DHCP lease lost
Feb 13 20:43:52.328384 waagent[1806]: 2025-02-13T20:43:52.324067Z INFO Daemon Daemon Create user account if not exists
Feb 13 20:43:52.330782 waagent[1806]: 2025-02-13T20:43:52.330714Z INFO Daemon Daemon User core already exists, skip useradd
Feb 13 20:43:52.337559 waagent[1806]: 2025-02-13T20:43:52.337491Z INFO Daemon Daemon Configure sudoer
Feb 13 20:43:52.342242 waagent[1806]: 2025-02-13T20:43:52.342172Z INFO Daemon Daemon Configure sshd
Feb 13 20:43:52.346588 waagent[1806]: 2025-02-13T20:43:52.346522Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Feb 13 20:43:52.360688 waagent[1806]: 2025-02-13T20:43:52.360611Z INFO Daemon Daemon Deploy ssh public key.
Feb 13 20:43:52.367446 systemd-networkd[1626]: eth0: DHCPv6 lease lost
Feb 13 20:43:52.375404 systemd-networkd[1626]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 20:43:53.452464 waagent[1806]: 2025-02-13T20:43:53.452037Z INFO Daemon Daemon Provisioning complete
Feb 13 20:43:53.472437 waagent[1806]: 2025-02-13T20:43:53.472378Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 13 20:43:53.479242 waagent[1806]: 2025-02-13T20:43:53.479173Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 13 20:43:53.489886 waagent[1806]: 2025-02-13T20:43:53.489824Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Feb 13 20:43:53.632410 waagent[1886]: 2025-02-13T20:43:53.632305Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 13 20:43:53.633376 waagent[1886]: 2025-02-13T20:43:53.632826Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.1
Feb 13 20:43:53.633376 waagent[1886]: 2025-02-13T20:43:53.632907Z INFO ExtHandler ExtHandler Python: 3.11.9
Feb 13 20:43:53.643557 waagent[1886]: 2025-02-13T20:43:53.643458Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 13 20:43:53.643767 waagent[1886]: 2025-02-13T20:43:53.643724Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 20:43:53.643834 waagent[1886]: 2025-02-13T20:43:53.643802Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 20:43:53.654009 waagent[1886]: 2025-02-13T20:43:53.653911Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 13 20:43:53.660927 waagent[1886]: 2025-02-13T20:43:53.660868Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Feb 13 20:43:53.661553 waagent[1886]: 2025-02-13T20:43:53.661503Z INFO ExtHandler
Feb 13 20:43:53.661637 waagent[1886]: 2025-02-13T20:43:53.661604Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a17668b5-2c80-4617-ad24-bbca1fd71967 eTag: 9099091925299873109 source: Fabric]
Feb 13 20:43:53.661958 waagent[1886]: 2025-02-13T20:43:53.661917Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 13 20:43:53.662590 waagent[1886]: 2025-02-13T20:43:53.662538Z INFO ExtHandler
Feb 13 20:43:53.662663 waagent[1886]: 2025-02-13T20:43:53.662631Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 13 20:43:53.667165 waagent[1886]: 2025-02-13T20:43:53.667118Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 13 20:43:53.745502 waagent[1886]: 2025-02-13T20:43:53.745325Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E2EA3EB2308F2BAAB33243FF35632FAA857A8630', 'hasPrivateKey': False}
Feb 13 20:43:53.745915 waagent[1886]: 2025-02-13T20:43:53.745864Z INFO ExtHandler Downloaded certificate {'thumbprint': '6C025B1A407FB29AC5BC041843B8AE3B87F271ED', 'hasPrivateKey': True}
Feb 13 20:43:53.746401 waagent[1886]: 2025-02-13T20:43:53.746298Z INFO ExtHandler Fetch goal state completed
Feb 13 20:43:53.764743 waagent[1886]: 2025-02-13T20:43:53.764670Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1886
Feb 13 20:43:53.764917 waagent[1886]: 2025-02-13T20:43:53.764879Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Feb 13 20:43:53.766716 waagent[1886]: 2025-02-13T20:43:53.766659Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.1', '', 'Flatcar Container Linux by Kinvolk']
Feb 13 20:43:53.767118 waagent[1886]: 2025-02-13T20:43:53.767076Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 13 20:43:53.780585 waagent[1886]: 2025-02-13T20:43:53.780535Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 13 20:43:53.780802 waagent[1886]: 2025-02-13T20:43:53.780757Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 13 20:43:53.787518 waagent[1886]: 2025-02-13T20:43:53.787000Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 13 20:43:53.794164 systemd[1]: Reloading requested from client PID 1901 ('systemctl') (unit waagent.service)...
Feb 13 20:43:53.794177 systemd[1]: Reloading...
Feb 13 20:43:53.888432 zram_generator::config[1944]: No configuration found.
Feb 13 20:43:53.993853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:43:54.090691 systemd[1]: Reloading finished in 296 ms.
Feb 13 20:43:54.122355 waagent[1886]: 2025-02-13T20:43:54.116737Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Feb 13 20:43:54.123036 systemd[1]: Reloading requested from client PID 1989 ('systemctl') (unit waagent.service)...
Feb 13 20:43:54.123152 systemd[1]: Reloading...
Feb 13 20:43:54.198402 zram_generator::config[2024]: No configuration found.
Feb 13 20:43:54.320674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:43:54.417942 systemd[1]: Reloading finished in 294 ms.
Feb 13 20:43:54.441807 waagent[1886]: 2025-02-13T20:43:54.441713Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Feb 13 20:43:54.441928 waagent[1886]: 2025-02-13T20:43:54.441890Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Feb 13 20:43:54.767744 waagent[1886]: 2025-02-13T20:43:54.767583Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 13 20:43:54.768278 waagent[1886]: 2025-02-13T20:43:54.768218Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 13 20:43:54.769213 waagent[1886]: 2025-02-13T20:43:54.769119Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 13 20:43:54.769341 waagent[1886]: 2025-02-13T20:43:54.769275Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 20:43:54.769894 waagent[1886]: 2025-02-13T20:43:54.769764Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 20:43:54.769894 waagent[1886]: 2025-02-13T20:43:54.769822Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 13 20:43:54.770031 waagent[1886]: 2025-02-13T20:43:54.769959Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 20:43:54.770431 waagent[1886]: 2025-02-13T20:43:54.770378Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 13 20:43:54.770833 waagent[1886]: 2025-02-13T20:43:54.770772Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 13 20:43:54.771045 waagent[1886]: 2025-02-13T20:43:54.770998Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 13 20:43:54.771045 waagent[1886]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Feb 13 20:43:54.771045 waagent[1886]: eth0        00000000        0114C80A        0003        0        0        1024        00000000        0        0        0
Feb 13 20:43:54.771045 waagent[1886]: eth0        0014C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Feb 13 20:43:54.771045 waagent[1886]: eth0        0114C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Feb 13 20:43:54.771045 waagent[1886]: eth0        10813FA8        0114C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb 13 20:43:54.771045 waagent[1886]: eth0        FEA9FEA9        0114C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb 13 20:43:54.771406 waagent[1886]: 2025-02-13T20:43:54.771276Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 13 20:43:54.771946 waagent[1886]: 2025-02-13T20:43:54.771834Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 20:43:54.772291 waagent[1886]: 2025-02-13T20:43:54.772102Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 13 20:43:54.772291 waagent[1886]: 2025-02-13T20:43:54.772187Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 13 20:43:54.772782 waagent[1886]: 2025-02-13T20:43:54.772728Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 13 20:43:54.773103 waagent[1886]: 2025-02-13T20:43:54.772992Z INFO EnvHandler ExtHandler Configure routes
Feb 13 20:43:54.773303 waagent[1886]: 2025-02-13T20:43:54.773257Z INFO EnvHandler ExtHandler Gateway:None
Feb 13 20:43:54.773447 waagent[1886]: 2025-02-13T20:43:54.773347Z INFO EnvHandler ExtHandler Routes:None
Feb 13 20:43:54.780064 waagent[1886]: 2025-02-13T20:43:54.779982Z INFO ExtHandler ExtHandler
Feb 13 20:43:54.780292 waagent[1886]: 2025-02-13T20:43:54.780144Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f4d23c33-c29d-404b-b28a-1200bbcb46f6 correlation 9782b4c2-cb65-49f6-be56-347cd5dc5e52 created: 2025-02-13T20:43:11.991707Z]
Feb 13 20:43:54.782179 waagent[1886]: 2025-02-13T20:43:54.782095Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 13 20:43:54.786107 waagent[1886]: 2025-02-13T20:43:54.786049Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms]
Feb 13 20:43:54.790923 waagent[1886]: 2025-02-13T20:43:54.790408Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 13 20:43:54.790923 waagent[1886]: Executing ['ip', '-a', '-o', 'link']:
Feb 13 20:43:54.790923 waagent[1886]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 13 20:43:54.790923 waagent[1886]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:c4:5f:98 brd ff:ff:ff:ff:ff:ff
Feb 13 20:43:54.790923 waagent[1886]: 3: enP9404s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:c4:5f:98 brd ff:ff:ff:ff:ff:ff\    altname enP9404p0s2
Feb 13 20:43:54.790923 waagent[1886]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 13 20:43:54.790923 waagent[1886]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Feb 13 20:43:54.790923 waagent[1886]: 2: eth0    inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\       valid_lft forever preferred_lft forever
Feb 13 20:43:54.790923 waagent[1886]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 13 20:43:54.790923 waagent[1886]: 1: lo    inet6 ::1/128 scope host noprefixroute \       valid_lft forever preferred_lft forever
Feb 13 20:43:54.790923 waagent[1886]: 2: eth0    inet6 fe80::20d:3aff:fec4:5f98/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Feb 13 20:43:54.790923 waagent[1886]: 3: enP9404s1    inet6 fe80::20d:3aff:fec4:5f98/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Feb 13 20:43:54.839623 waagent[1886]: 2025-02-13T20:43:54.839560Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4F41E12B-98D0-4EE4-AD07-3D6C9555880D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Feb 13 20:43:54.846287 waagent[1886]: 2025-02-13T20:43:54.846238Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 13 20:43:54.846287 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.846287 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.846287 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.846287 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.846287 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.846287 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.846287 waagent[1886]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb 13 20:43:54.846287 waagent[1886]:        4      594 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb 13 20:43:54.846287 waagent[1886]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb 13 20:43:54.849864 waagent[1886]: 2025-02-13T20:43:54.849803Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 13 20:43:54.849864 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.849864 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.849864 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.849864 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.849864 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 20:43:54.849864 waagent[1886]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 20:43:54.849864 waagent[1886]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb 13 20:43:54.849864 waagent[1886]:        5      646 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb 13 20:43:54.849864 waagent[1886]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb 13 20:43:54.850550 waagent[1886]: 2025-02-13T20:43:54.850418Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 13 20:44:02.263633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:44:02.273541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:02.376080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:02.387891 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:02.425117 kubelet[2116]: E0213 20:44:02.425027    2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:02.428023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:02.428182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
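
This crash loop is expected on a freshly provisioned node: the kubelet unit starts before anything has written /var/lib/kubelet/config.yaml, which kubeadm only drops during init/join (an assumption, but consistent with the kubeadm-style static pods that appear later in this log). systemd keeps rescheduling the restart, and the counter climbs through 6 below, until the file exists. A minimal check, assuming a kubeadm-managed node:

    systemctl status kubelet --no-pager    # shows the restart counter and last error
    ls -l /var/lib/kubelet/config.yaml     # absent until 'kubeadm init'/'kubeadm join' runs
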
Feb 13 20:44:12.622356 chronyd[1665]: Selected source PHC0
Feb 13 20:44:12.678728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:44:12.688615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:12.785306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:12.789991 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:12.827833 kubelet[2131]: E0213 20:44:12.827720    2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:12.830584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:12.830866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:44:19.022996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 20:44:19.033791 systemd[1]: Started sshd@0-10.200.20.20:22-10.200.16.10:33016.service - OpenSSH per-connection server daemon (10.200.16.10:33016).
Feb 13 20:44:19.498106 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 33016 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:19.499463 sshd[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:19.503963 systemd-logind[1698]: New session 3 of user core.
Feb 13 20:44:19.517526 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:44:19.905118 systemd[1]: Started sshd@1-10.200.20.20:22-10.200.16.10:33022.service - OpenSSH per-connection server daemon (10.200.16.10:33022).
Feb 13 20:44:20.385300 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 33022 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:20.386638 sshd[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:20.390795 systemd-logind[1698]: New session 4 of user core.
Feb 13 20:44:20.398621 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:44:20.746145 sshd[2144]: pam_unix(sshd:session): session closed for user core
Feb 13 20:44:20.749648 systemd[1]: sshd@1-10.200.20.20:22-10.200.16.10:33022.service: Deactivated successfully.
Feb 13 20:44:20.751271 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:44:20.752636 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:44:20.753684 systemd-logind[1698]: Removed session 4.
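
sshd is running in systemd's per-connection (socket-activated) mode here: each accepted connection gets its own transient unit named sshd@<seq>-<local-addr>:<port>-<peer-addr>:<port>.service, which is why every login/logout pair in this stretch is bracketed by Started/Deactivated messages. Enumerating the live per-connection units (a sketch):

    systemctl list-units --type=service 'sshd@*' --no-legend
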
Feb 13 20:44:20.833708 systemd[1]: Started sshd@2-10.200.20.20:22-10.200.16.10:33038.service - OpenSSH per-connection server daemon (10.200.16.10:33038).
Feb 13 20:44:21.318139 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 33038 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:21.319552 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:21.323635 systemd-logind[1698]: New session 5 of user core.
Feb 13 20:44:21.334494 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:44:21.678325 sshd[2151]: pam_unix(sshd:session): session closed for user core
Feb 13 20:44:21.681718 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:44:21.681929 systemd[1]: sshd@2-10.200.20.20:22-10.200.16.10:33038.service: Deactivated successfully.
Feb 13 20:44:21.683619 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:44:21.686001 systemd-logind[1698]: Removed session 5.
Feb 13 20:44:21.768650 systemd[1]: Started sshd@3-10.200.20.20:22-10.200.16.10:33050.service - OpenSSH per-connection server daemon (10.200.16.10:33050).
Feb 13 20:44:22.263104 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 33050 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:22.264471 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:22.268904 systemd-logind[1698]: New session 6 of user core.
Feb 13 20:44:22.274492 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:44:22.627720 sshd[2158]: pam_unix(sshd:session): session closed for user core
Feb 13 20:44:22.631254 systemd[1]: sshd@3-10.200.20.20:22-10.200.16.10:33050.service: Deactivated successfully.
Feb 13 20:44:22.632856 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:44:22.633535 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:44:22.634493 systemd-logind[1698]: Removed session 6.
Feb 13 20:44:22.714046 systemd[1]: Started sshd@4-10.200.20.20:22-10.200.16.10:33064.service - OpenSSH per-connection server daemon (10.200.16.10:33064).
Feb 13 20:44:23.070064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 20:44:23.078754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:23.178129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:23.192620 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:23.194922 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 33064 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:23.194844 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:23.200427 systemd-logind[1698]: New session 7 of user core.
Feb 13 20:44:23.204876 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:44:23.232891 kubelet[2175]: E0213 20:44:23.232776    2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:23.235197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:23.235367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:44:23.544813 sudo[2183]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:44:23.545086 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:44:23.559232 sudo[2183]: pam_unix(sudo:session): session closed for user root
Feb 13 20:44:23.649618 sshd[2165]: pam_unix(sshd:session): session closed for user core
Feb 13 20:44:23.653530 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:44:23.654178 systemd[1]: sshd@4-10.200.20.20:22-10.200.16.10:33064.service: Deactivated successfully.
Feb 13 20:44:23.656173 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:44:23.657558 systemd-logind[1698]: Removed session 7.
Feb 13 20:44:23.741017 systemd[1]: Started sshd@5-10.200.20.20:22-10.200.16.10:33070.service - OpenSSH per-connection server daemon (10.200.16.10:33070).
Feb 13 20:44:24.226899 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 33070 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:24.229577 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:24.233589 systemd-logind[1698]: New session 8 of user core.
Feb 13 20:44:24.243482 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:44:24.501163 sudo[2192]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:44:24.501718 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:44:24.505113 sudo[2192]: pam_unix(sudo:session): session closed for user root
Feb 13 20:44:24.510183 sudo[2191]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 20:44:24.510486 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:44:24.524621 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 20:44:24.527380 auditctl[2195]: No rules
Feb 13 20:44:24.528105 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:44:24.528435 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 20:44:24.530324 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:44:24.556151 augenrules[2213]: No rules
Feb 13 20:44:24.557766 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:44:24.559604 sudo[2191]: pam_unix(sudo:session): session closed for user root
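
The two sudo commands above amount to an audit-rules reset: the rules.d fragments are deleted, then audit-rules.service is restarted, which re-runs augenrules to merge whatever remains of /etc/audit/rules.d/*.rules and load the result, so both auditctl and augenrules now report "No rules". The same reload done by hand (a sketch, assuming the stock audit userspace tools):

    sudo augenrules --load    # regenerate /etc/audit/audit.rules from rules.d/ and load it
    sudo auditctl -l          # prints "No rules" when the loaded ruleset is empty
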
Feb 13 20:44:24.635180 sshd[2188]: pam_unix(sshd:session): session closed for user core
Feb 13 20:44:24.638262 systemd[1]: sshd@5-10.200.20.20:22-10.200.16.10:33070.service: Deactivated successfully.
Feb 13 20:44:24.640147 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:44:24.641810 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:44:24.642961 systemd-logind[1698]: Removed session 8.
Feb 13 20:44:24.716827 systemd[1]: Started sshd@6-10.200.20.20:22-10.200.16.10:33074.service - OpenSSH per-connection server daemon (10.200.16.10:33074).
Feb 13 20:44:25.179525 sshd[2221]: Accepted publickey for core from 10.200.16.10 port 33074 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:44:25.180874 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:44:25.184819 systemd-logind[1698]: New session 9 of user core.
Feb 13 20:44:25.192492 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:44:25.436794 sudo[2224]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:44:25.437654 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:44:25.761844 (dockerd)[2239]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:44:25.762200 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:44:25.994877 dockerd[2239]: time="2025-02-13T20:44:25.994812732Z" level=info msg="Starting up"
Feb 13 20:44:26.155361 dockerd[2239]: time="2025-02-13T20:44:26.154870761Z" level=info msg="Loading containers: start."
Feb 13 20:44:26.262353 kernel: Initializing XFRM netlink socket
Feb 13 20:44:26.326125 systemd-networkd[1626]: docker0: Link UP
Feb 13 20:44:26.351873 dockerd[2239]: time="2025-02-13T20:44:26.351835664Z" level=info msg="Loading containers: done."
Feb 13 20:44:26.369285 dockerd[2239]: time="2025-02-13T20:44:26.369162880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:44:26.369566 dockerd[2239]: time="2025-02-13T20:44:26.369393480Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:44:26.369926 dockerd[2239]: time="2025-02-13T20:44:26.369733240Z" level=info msg="Daemon has completed initialization"
Feb 13 20:44:26.431180 dockerd[2239]: time="2025-02-13T20:44:26.430587017Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:44:26.431401 systemd[1]: Started docker.service - Docker Application Container Engine.
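
The overlay2 warning during startup is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd falls back from the native overlay diff driver to the slower generic one, which only affects image-build performance. Confirming the daemon state it reached (a sketch):

    docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect: overlay2 26.1.0
    ip link show docker0                                    # the bridge systemd-networkd reported up
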
Feb 13 20:44:27.509968 containerd[1726]: time="2025-02-13T20:44:27.509924418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 20:44:28.352983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489097486.mount: Deactivated successfully.
Feb 13 20:44:29.616791 containerd[1726]: time="2025-02-13T20:44:29.616726247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:29.622754 containerd[1726]: time="2025-02-13T20:44:29.622495373Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375"
Feb 13 20:44:29.625470 containerd[1726]: time="2025-02-13T20:44:29.625395856Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:29.630536 containerd[1726]: time="2025-02-13T20:44:29.630484501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:29.631862 containerd[1726]: time="2025-02-13T20:44:29.631674102Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.121707164s"
Feb 13 20:44:29.631862 containerd[1726]: time="2025-02-13T20:44:29.631715862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\""
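
These PullImage events come from containerd's CRI image service; the client driving them is most likely a 'kubeadm config images pull' kicked off by the install.sh session above (an assumption, since the kubelet is still crash-looping at this point). The same pull and inspection can be done manually with crictl, assuming it is pointed at containerd's socket:

    crictl pull registry.k8s.io/kube-apiserver:v1.31.6
    crictl images --digests | grep kube-apiserver
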
Feb 13 20:44:29.632886 containerd[1726]: time="2025-02-13T20:44:29.632850263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 20:44:30.835145 containerd[1726]: time="2025-02-13T20:44:30.835076559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:30.839697 containerd[1726]: time="2025-02-13T20:44:30.839355643Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773"
Feb 13 20:44:30.843895 containerd[1726]: time="2025-02-13T20:44:30.843847408Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:30.851088 containerd[1726]: time="2025-02-13T20:44:30.851021055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:30.852183 containerd[1726]: time="2025-02-13T20:44:30.852047857Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.219155514s"
Feb 13 20:44:30.852183 containerd[1726]: time="2025-02-13T20:44:30.852085217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\""
Feb 13 20:44:30.853161 containerd[1726]: time="2025-02-13T20:44:30.853105418Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 20:44:31.998842 containerd[1726]: time="2025-02-13T20:44:31.997802573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:32.001406 containerd[1726]: time="2025-02-13T20:44:32.001364537Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540"
Feb 13 20:44:32.007842 containerd[1726]: time="2025-02-13T20:44:32.007795903Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:32.017357 containerd[1726]: time="2025-02-13T20:44:32.015645992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:32.017357 containerd[1726]: time="2025-02-13T20:44:32.016825793Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.163686055s"
Feb 13 20:44:32.017357 containerd[1726]: time="2025-02-13T20:44:32.016864953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\""
Feb 13 20:44:32.017977 containerd[1726]: time="2025-02-13T20:44:32.017763314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 20:44:33.147645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913742107.mount: Deactivated successfully.
Feb 13 20:44:33.320697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 20:44:33.328669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:33.431073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:33.439666 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:33.482719 kubelet[2451]: E0213 20:44:33.482584    2451 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:33.484757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:33.484900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:44:33.792442 containerd[1726]: time="2025-02-13T20:44:33.792129327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:33.795756 containerd[1726]: time="2025-02-13T20:44:33.795574610Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 20:44:33.799661 containerd[1726]: time="2025-02-13T20:44:33.799595575Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:33.803400 containerd[1726]: time="2025-02-13T20:44:33.803316538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:33.804368 containerd[1726]: time="2025-02-13T20:44:33.803885739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.786087905s"
Feb 13 20:44:33.804368 containerd[1726]: time="2025-02-13T20:44:33.803920939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 20:44:33.804368 containerd[1726]: time="2025-02-13T20:44:33.804359460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:44:34.549589 update_engine[1703]: I20250213 20:44:34.549371  1703 update_attempter.cc:509] Updating boot flags...
Feb 13 20:44:34.561305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757012387.mount: Deactivated successfully.
Feb 13 20:44:34.616375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2475)
Feb 13 20:44:36.070067 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 13 20:44:36.175605 containerd[1726]: time="2025-02-13T20:44:36.175549456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.180014 containerd[1726]: time="2025-02-13T20:44:36.179973660Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 20:44:36.183976 containerd[1726]: time="2025-02-13T20:44:36.183941464Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.189172 containerd[1726]: time="2025-02-13T20:44:36.189102550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.190312 containerd[1726]: time="2025-02-13T20:44:36.190172951Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.385783731s"
Feb 13 20:44:36.190312 containerd[1726]: time="2025-02-13T20:44:36.190212191Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 20:44:36.191226 containerd[1726]: time="2025-02-13T20:44:36.191020952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:44:36.833815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328576802.mount: Deactivated successfully.
Feb 13 20:44:36.858523 containerd[1726]: time="2025-02-13T20:44:36.858467495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.861371 containerd[1726]: time="2025-02-13T20:44:36.861152699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 20:44:36.865251 containerd[1726]: time="2025-02-13T20:44:36.865199705Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.870320 containerd[1726]: time="2025-02-13T20:44:36.870261433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:36.871323 containerd[1726]: time="2025-02-13T20:44:36.870920674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 679.865002ms"
Feb 13 20:44:36.871323 containerd[1726]: time="2025-02-13T20:44:36.870955994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
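
pause is the tiny sandbox image: one copy runs per pod and simply sleeps, holding the pod's shared namespaces open while application containers come and go, which is why it is fetched alongside the control-plane images. Inspecting the cached copy (a sketch):

    crictl inspecti registry.k8s.io/pause:3.10 | head
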
Feb 13 20:44:36.871458 containerd[1726]: time="2025-02-13T20:44:36.871421354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 20:44:37.561343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956018209.mount: Deactivated successfully.
Feb 13 20:44:43.570187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 20:44:43.578627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:43.844078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:43.848991 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:43.884974 kubelet[2606]: E0213 20:44:43.884915    2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:43.887414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:43.887564 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:44:49.418996 containerd[1726]: time="2025-02-13T20:44:49.418936954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:49.464147 containerd[1726]: time="2025-02-13T20:44:49.464101270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Feb 13 20:44:49.511308 containerd[1726]: time="2025-02-13T20:44:49.511247389Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:49.812654 containerd[1726]: time="2025-02-13T20:44:49.812301854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:44:49.813578 containerd[1726]: time="2025-02-13T20:44:49.813174575Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 12.941725061s"
Feb 13 20:44:49.813578 containerd[1726]: time="2025-02-13T20:44:49.813211775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
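
The etcd pull dominates this sequence (about 66 MB, 12.9 s) because the pulls run one after another, each starting as the previous returns. Note that the log records both the tag and the repo digest; re-pulling by digest pins exactly this content regardless of where the tag later points (a sketch):

    crictl pull registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
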
Feb 13 20:44:54.070199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Feb 13 20:44:54.081899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:54.176501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:54.181546 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:44:54.222849 kubelet[2642]: E0213 20:44:54.222806    2642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:44:54.225615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:44:54.226140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:44:57.535148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:57.544605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:57.575593 systemd[1]: Reloading requested from client PID 2656 ('systemctl') (unit session-9.scope)...
Feb 13 20:44:57.575758 systemd[1]: Reloading...
Feb 13 20:44:57.688374 zram_generator::config[2702]: No configuration found.
Feb 13 20:44:57.797201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:44:57.891175 systemd[1]: Reloading finished in 314 ms.
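
The docker.socket warning during the reload is cosmetic: systemd transparently rewrites the legacy /var/run/docker.sock path to /run/docker.sock at load time. If the nag should go away, a drop-in can override the socket path without touching the vendor unit (a sketch; the drop-in file name is arbitrary):

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload
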
Feb 13 20:44:57.935598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:57.938212 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:57.942397 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:44:57.942707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:57.949662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:44:58.554150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:44:58.566711 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:44:58.604817 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:44:58.604817 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:44:58.604817 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:44:58.605223 kubelet[2765]: I0213 20:44:58.604866    2765 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:44:59.444838 kubelet[2765]: I0213 20:44:59.444783    2765 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 20:44:59.444838 kubelet[2765]: I0213 20:44:59.444818    2765 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:44:59.445081 kubelet[2765]: I0213 20:44:59.445057    2765 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 20:44:59.462119 kubelet[2765]: E0213 20:44:59.462076    2765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:44:59.462520 kubelet[2765]: I0213 20:44:59.462366    2765 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:44:59.469629 kubelet[2765]: E0213 20:44:59.469588    2765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:44:59.469629 kubelet[2765]: I0213 20:44:59.469623    2765 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:44:59.473660 kubelet[2765]: I0213 20:44:59.473627    2765 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 20:44:59.474674 kubelet[2765]: I0213 20:44:59.473787    2765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 20:44:59.474674 kubelet[2765]: I0213 20:44:59.474001    2765 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:44:59.474674 kubelet[2765]: I0213 20:44:59.474031    2765 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-d3f644b76a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
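
The HardEvictionThresholds in the NodeConfig dump above are the stock kubelet defaults, just rendered as JSON fractions. Expressed as the equivalent KubeletConfiguration stanza they would look roughly like this (a sketch; the node's real /var/lib/kubelet/config.yaml may carry more fields):

    cat <<'EOF' > /tmp/eviction-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd       # matches "CgroupDriver":"systemd" in the dump
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF
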
Feb 13 20:44:59.474674 kubelet[2765]: I0213 20:44:59.474313    2765 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:44:59.474864 kubelet[2765]: I0213 20:44:59.474324    2765 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 20:44:59.474864 kubelet[2765]: I0213 20:44:59.474479    2765 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:44:59.476241 kubelet[2765]: I0213 20:44:59.476215    2765 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 20:44:59.476365 kubelet[2765]: I0213 20:44:59.476353    2765 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:44:59.476448 kubelet[2765]: I0213 20:44:59.476439    2765 kubelet.go:314] "Adding apiserver pod source"
Feb 13 20:44:59.476507 kubelet[2765]: I0213 20:44:59.476499    2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:44:59.479137 kubelet[2765]: W0213 20:44:59.479089    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:44:59.479290 kubelet[2765]: E0213 20:44:59.479270    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:44:59.479473 kubelet[2765]: I0213 20:44:59.479457    2765 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:44:59.481076 kubelet[2765]: I0213 20:44:59.481053    2765 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:44:59.481627 kubelet[2765]: W0213 20:44:59.481610    2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:44:59.482164 kubelet[2765]: W0213 20:44:59.482122    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d3f644b76a&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:44:59.482413 kubelet[2765]: E0213 20:44:59.482286    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d3f644b76a&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:44:59.483676 kubelet[2765]: I0213 20:44:59.483523    2765 server.go:1269] "Started kubelet"
Feb 13 20:44:59.484875 kubelet[2765]: I0213 20:44:59.484839    2765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:44:59.485899 kubelet[2765]: I0213 20:44:59.485745    2765 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 20:44:59.486673 kubelet[2765]: I0213 20:44:59.486617    2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:44:59.487161 kubelet[2765]: I0213 20:44:59.487000    2765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:44:59.488561 kubelet[2765]: E0213 20:44:59.487568    2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-d3f644b76a.1823df637cc9be3a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-d3f644b76a,UID:ci-4081.3.1-a-d3f644b76a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-d3f644b76a,},FirstTimestamp:2025-02-13 20:44:59.483495994 +0000 UTC m=+0.913692516,LastTimestamp:2025-02-13 20:44:59.483495994 +0000 UTC m=+0.913692516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-d3f644b76a,}"
Feb 13 20:44:59.488816 kubelet[2765]: I0213 20:44:59.488544    2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:44:59.488816 kubelet[2765]: I0213 20:44:59.488684    2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:44:59.490521 kubelet[2765]: I0213 20:44:59.490487    2765 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 20:44:59.491113 kubelet[2765]: I0213 20:44:59.490603    2765 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 20:44:59.491113 kubelet[2765]: I0213 20:44:59.490663    2765 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:44:59.491113 kubelet[2765]: W0213 20:44:59.491016    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:44:59.491113 kubelet[2765]: E0213 20:44:59.491057    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:44:59.491669 kubelet[2765]: E0213 20:44:59.491296    2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:44:59.491669 kubelet[2765]: E0213 20:44:59.491391    2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d3f644b76a?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="200ms"
Feb 13 20:44:59.493825 kubelet[2765]: I0213 20:44:59.493793    2765 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:44:59.493825 kubelet[2765]: I0213 20:44:59.493814    2765 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:44:59.493922 kubelet[2765]: I0213 20:44:59.493892    2765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:44:59.508692 kubelet[2765]: I0213 20:44:59.508541    2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:44:59.509540 kubelet[2765]: I0213 20:44:59.509522    2765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:44:59.509932 kubelet[2765]: I0213 20:44:59.509616    2765 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 20:44:59.509932 kubelet[2765]: I0213 20:44:59.509641    2765 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 20:44:59.509932 kubelet[2765]: E0213 20:44:59.509680    2765 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:44:59.519379 kubelet[2765]: W0213 20:44:59.519312    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:44:59.519986 kubelet[2765]: E0213 20:44:59.519961    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
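
Every failure in this burst is the same symptom seen from different components: nothing is listening on 10.200.20.20:6443 yet, because the kube-apiserver this kubelet talks to is itself one of the static pods the kubelet is about to start from /etc/kubernetes/manifests. The errors clear on their own once the sandboxes created below come up. Watching the bootstrap resolve (a sketch):

    ls /etc/kubernetes/manifests/                   # the static pod manifests kubeadm wrote
    crictl ps --name kube-apiserver                 # appears once sandbox and container start
    curl -ks https://10.200.20.20:6443/healthz      # answers "ok" when the API is up
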
Feb 13 20:44:59.591840 kubelet[2765]: E0213 20:44:59.591786    2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:44:59.610026 kubelet[2765]: E0213 20:44:59.610007    2765 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 20:44:59.692538 kubelet[2765]: E0213 20:44:59.692275    2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:44:59.692674 kubelet[2765]: E0213 20:44:59.692590    2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d3f644b76a?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="400ms"
Feb 13 20:44:59.793092 kubelet[2765]: E0213 20:44:59.792992    2765 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:44:59.807704 kubelet[2765]: I0213 20:44:59.807672    2765 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:44:59.807704 kubelet[2765]: I0213 20:44:59.807693    2765 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:44:59.807844 kubelet[2765]: I0213 20:44:59.807723    2765 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:44:59.810863 kubelet[2765]: E0213 20:44:59.810813    2765 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 20:44:59.812108 kubelet[2765]: I0213 20:44:59.811994    2765 policy_none.go:49] "None policy: Start"
Feb 13 20:44:59.813041 kubelet[2765]: I0213 20:44:59.812938    2765 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:44:59.813041 kubelet[2765]: I0213 20:44:59.812983    2765 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:44:59.821835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 20:44:59.838111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 20:44:59.841017 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 20:44:59.850458 kubelet[2765]: I0213 20:44:59.850422    2765 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:44:59.850675 kubelet[2765]: I0213 20:44:59.850654    2765 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:44:59.850707 kubelet[2765]: I0213 20:44:59.850674    2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:44:59.852679 kubelet[2765]: I0213 20:44:59.852489    2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:44:59.854829 kubelet[2765]: E0213 20:44:59.854772    2765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:44:59.952939 kubelet[2765]: I0213 20:44:59.952838    2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:44:59.953269 kubelet[2765]: E0213 20:44:59.953239    2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.093399 kubelet[2765]: E0213 20:45:00.093356    2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d3f644b76a?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="800ms"
Feb 13 20:45:00.155353 kubelet[2765]: I0213 20:45:00.155239    2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.155594 kubelet[2765]: E0213 20:45:00.155559    2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.221609 systemd[1]: Created slice kubepods-burstable-podbf23dab1765bc86e527202369ce1850c.slice - libcontainer container kubepods-burstable-podbf23dab1765bc86e527202369ce1850c.slice.
Feb 13 20:45:00.240647 systemd[1]: Created slice kubepods-burstable-podf397481b8c83694a691939b3ae022018.slice - libcontainer container kubepods-burstable-podf397481b8c83694a691939b3ae022018.slice.
Feb 13 20:45:00.253707 systemd[1]: Created slice kubepods-burstable-poda64d6397b114d75a7b6f3f869a60fb1f.slice - libcontainer container kubepods-burstable-poda64d6397b114d75a7b6f3f869a60fb1f.slice.
Feb 13 20:45:00.294778 kubelet[2765]: I0213 20:45:00.294740    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295111 kubelet[2765]: I0213 20:45:00.294984    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295111 kubelet[2765]: I0213 20:45:00.295012    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295111 kubelet[2765]: I0213 20:45:00.295032    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a64d6397b114d75a7b6f3f869a60fb1f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-d3f644b76a\" (UID: \"a64d6397b114d75a7b6f3f869a60fb1f\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295111 kubelet[2765]: I0213 20:45:00.295050    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295111 kubelet[2765]: I0213 20:45:00.295067    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295259 kubelet[2765]: I0213 20:45:00.295094    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295259 kubelet[2765]: I0213 20:45:00.295124    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.295259 kubelet[2765]: I0213 20:45:00.295142    2765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.538091 containerd[1726]: time="2025-02-13T20:45:00.537977438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-d3f644b76a,Uid:bf23dab1765bc86e527202369ce1850c,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:00.552862 containerd[1726]: time="2025-02-13T20:45:00.552600740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-d3f644b76a,Uid:f397481b8c83694a691939b3ae022018,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:00.557738 containerd[1726]: time="2025-02-13T20:45:00.557062667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-d3f644b76a,Uid:a64d6397b114d75a7b6f3f869a60fb1f,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:00.558371 kubelet[2765]: I0213 20:45:00.558326    2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.558839 kubelet[2765]: E0213 20:45:00.558794    2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:00.626963 kubelet[2765]: W0213 20:45:00.626904    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:45:00.627412 kubelet[2765]: E0213 20:45:00.627367    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:00.806803 kubelet[2765]: W0213 20:45:00.806660    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:45:00.806803 kubelet[2765]: E0213 20:45:00.806726    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:00.893792 kubelet[2765]: E0213 20:45:00.893741    2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d3f644b76a?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="1.6s"
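
Worth noticing: the lease controller's retry interval doubles on each failed attempt, 200ms at 20:44:59, then 400ms, 800ms, and now 1.6s, which is plain client-side exponential backoff. The same schedule, sketched in shell purely as an illustration (not the controller's actual code):

    t=0.2
    for i in 1 2 3 4; do
        echo "attempt $i failed; retrying in ${t}s"
        sleep "$t"
        t=$(awk -v t="$t" 'BEGIN { print t * 2 }')   # double the interval each round
    done
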
Feb 13 20:45:00.916512 kubelet[2765]: W0213 20:45:00.916419    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d3f644b76a&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:45:00.916512 kubelet[2765]: E0213 20:45:00.916482    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-d3f644b76a&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:01.018871 kubelet[2765]: W0213 20:45:01.018812    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:45:01.018993 kubelet[2765]: E0213 20:45:01.018877    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:01.364402 kubelet[2765]: I0213 20:45:01.364024    2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:01.364402 kubelet[2765]: E0213 20:45:01.364353    2765 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:01.571040 kubelet[2765]: E0213 20:45:01.570987    2765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:01.876503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957549689.mount: Deactivated successfully.
Feb 13 20:45:01.920001 containerd[1726]: time="2025-02-13T20:45:01.919945076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:45:01.922844 containerd[1726]: time="2025-02-13T20:45:01.922797560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 20:45:01.925750 containerd[1726]: time="2025-02-13T20:45:01.925707325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:45:01.931694 containerd[1726]: time="2025-02-13T20:45:01.930960494Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:45:01.939019 containerd[1726]: time="2025-02-13T20:45:01.938928027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:45:01.944447 containerd[1726]: time="2025-02-13T20:45:01.943425435Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:45:01.949026 containerd[1726]: time="2025-02-13T20:45:01.948969364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:45:01.953949 containerd[1726]: time="2025-02-13T20:45:01.953892932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:45:01.955176 containerd[1726]: time="2025-02-13T20:45:01.954719654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.402035154s"
Feb 13 20:45:01.956339 containerd[1726]: time="2025-02-13T20:45:01.956300936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.418243498s"
Feb 13 20:45:01.961238 containerd[1726]: time="2025-02-13T20:45:01.961191544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.404043277s"
Feb 13 20:45:02.192585 containerd[1726]: time="2025-02-13T20:45:02.192087010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:02.192585 containerd[1726]: time="2025-02-13T20:45:02.192144330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:02.192585 containerd[1726]: time="2025-02-13T20:45:02.192166370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.192585 containerd[1726]: time="2025-02-13T20:45:02.192261730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.201568 containerd[1726]: time="2025-02-13T20:45:02.201215065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:02.203293 containerd[1726]: time="2025-02-13T20:45:02.201805746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:02.203293 containerd[1726]: time="2025-02-13T20:45:02.202510907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.203293 containerd[1726]: time="2025-02-13T20:45:02.202952428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.205918 containerd[1726]: time="2025-02-13T20:45:02.205533032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:02.205918 containerd[1726]: time="2025-02-13T20:45:02.205679193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:02.205918 containerd[1726]: time="2025-02-13T20:45:02.205702553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.206559 containerd[1726]: time="2025-02-13T20:45:02.206068073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:02.218549 systemd[1]: Started cri-containerd-34bca3318ffe0f34afe3c89593b2c754e2155ed7f9a08eb687297718894448ab.scope - libcontainer container 34bca3318ffe0f34afe3c89593b2c754e2155ed7f9a08eb687297718894448ab.
Feb 13 20:45:02.237631 systemd[1]: Started cri-containerd-b9c8dca789829d9ff2511ae1073139335c2b448df2f7065da8d9d66681183ae8.scope - libcontainer container b9c8dca789829d9ff2511ae1073139335c2b448df2f7065da8d9d66681183ae8.
Feb 13 20:45:02.242467 systemd[1]: Started cri-containerd-eb9abf6893cf2d80fd26108c0d236df22e393397d04e0f557560c56abbf4dfe7.scope - libcontainer container eb9abf6893cf2d80fd26108c0d236df22e393397d04e0f557560c56abbf4dfe7.
Feb 13 20:45:02.274769 containerd[1726]: time="2025-02-13T20:45:02.274711988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-d3f644b76a,Uid:bf23dab1765bc86e527202369ce1850c,Namespace:kube-system,Attempt:0,} returns sandbox id \"34bca3318ffe0f34afe3c89593b2c754e2155ed7f9a08eb687297718894448ab\""
Feb 13 20:45:02.280421 containerd[1726]: time="2025-02-13T20:45:02.279990477Z" level=info msg="CreateContainer within sandbox \"34bca3318ffe0f34afe3c89593b2c754e2155ed7f9a08eb687297718894448ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 20:45:02.294127 containerd[1726]: time="2025-02-13T20:45:02.294087300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-d3f644b76a,Uid:f397481b8c83694a691939b3ae022018,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb9abf6893cf2d80fd26108c0d236df22e393397d04e0f557560c56abbf4dfe7\""
Feb 13 20:45:02.297778 containerd[1726]: time="2025-02-13T20:45:02.297720506Z" level=info msg="CreateContainer within sandbox \"eb9abf6893cf2d80fd26108c0d236df22e393397d04e0f557560c56abbf4dfe7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 20:45:02.304136 containerd[1726]: time="2025-02-13T20:45:02.304088717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-d3f644b76a,Uid:a64d6397b114d75a7b6f3f869a60fb1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9c8dca789829d9ff2511ae1073139335c2b448df2f7065da8d9d66681183ae8\""
Feb 13 20:45:02.307492 containerd[1726]: time="2025-02-13T20:45:02.307415762Z" level=info msg="CreateContainer within sandbox \"b9c8dca789829d9ff2511ae1073139335c2b448df2f7065da8d9d66681183ae8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 20:45:02.338736 containerd[1726]: time="2025-02-13T20:45:02.338378614Z" level=info msg="CreateContainer within sandbox \"34bca3318ffe0f34afe3c89593b2c754e2155ed7f9a08eb687297718894448ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"33ca0c9191f9f63539cbffc8a59dc9b19108024574cf4308f7f4671b2a0fb723\""
Feb 13 20:45:02.339531 containerd[1726]: time="2025-02-13T20:45:02.339487296Z" level=info msg="StartContainer for \"33ca0c9191f9f63539cbffc8a59dc9b19108024574cf4308f7f4671b2a0fb723\""
Feb 13 20:45:02.364540 systemd[1]: Started cri-containerd-33ca0c9191f9f63539cbffc8a59dc9b19108024574cf4308f7f4671b2a0fb723.scope - libcontainer container 33ca0c9191f9f63539cbffc8a59dc9b19108024574cf4308f7f4671b2a0fb723.
Feb 13 20:45:02.372720 containerd[1726]: time="2025-02-13T20:45:02.372666471Z" level=info msg="CreateContainer within sandbox \"eb9abf6893cf2d80fd26108c0d236df22e393397d04e0f557560c56abbf4dfe7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb24e715357f7fcbf541d9401656efa9714e5cd7595b9c0b811cb6fa8cebddc5\""
Feb 13 20:45:02.373556 containerd[1726]: time="2025-02-13T20:45:02.373192912Z" level=info msg="StartContainer for \"bb24e715357f7fcbf541d9401656efa9714e5cd7595b9c0b811cb6fa8cebddc5\""
Feb 13 20:45:02.400737 containerd[1726]: time="2025-02-13T20:45:02.400653398Z" level=info msg="CreateContainer within sandbox \"b9c8dca789829d9ff2511ae1073139335c2b448df2f7065da8d9d66681183ae8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce7adda2078b2a54449d4040c0ac2b5d53e496ac807354c2136d96f6c377f1cf\""
Feb 13 20:45:02.401813 containerd[1726]: time="2025-02-13T20:45:02.401068279Z" level=info msg="StartContainer for \"ce7adda2078b2a54449d4040c0ac2b5d53e496ac807354c2136d96f6c377f1cf\""
Feb 13 20:45:02.401549 systemd[1]: Started cri-containerd-bb24e715357f7fcbf541d9401656efa9714e5cd7595b9c0b811cb6fa8cebddc5.scope - libcontainer container bb24e715357f7fcbf541d9401656efa9714e5cd7595b9c0b811cb6fa8cebddc5.
Feb 13 20:45:02.417616 containerd[1726]: time="2025-02-13T20:45:02.417563906Z" level=info msg="StartContainer for \"33ca0c9191f9f63539cbffc8a59dc9b19108024574cf4308f7f4671b2a0fb723\" returns successfully"
Feb 13 20:45:02.437696 systemd[1]: Started cri-containerd-ce7adda2078b2a54449d4040c0ac2b5d53e496ac807354c2136d96f6c377f1cf.scope - libcontainer container ce7adda2078b2a54449d4040c0ac2b5d53e496ac807354c2136d96f6c377f1cf.
Feb 13 20:45:02.463948 containerd[1726]: time="2025-02-13T20:45:02.463416423Z" level=info msg="StartContainer for \"bb24e715357f7fcbf541d9401656efa9714e5cd7595b9c0b811cb6fa8cebddc5\" returns successfully"
Feb 13 20:45:02.472118 kubelet[2765]: W0213 20:45:02.471358    2765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Feb 13 20:45:02.472118 kubelet[2765]: E0213 20:45:02.471401    2765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:45:02.495729 kubelet[2765]: E0213 20:45:02.495658    2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-d3f644b76a?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="3.2s"
Feb 13 20:45:02.501235 containerd[1726]: time="2025-02-13T20:45:02.501186526Z" level=info msg="StartContainer for \"ce7adda2078b2a54449d4040c0ac2b5d53e496ac807354c2136d96f6c377f1cf\" returns successfully"
Feb 13 20:45:02.968438 kubelet[2765]: I0213 20:45:02.967790    2765 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:04.853859 kubelet[2765]: I0213 20:45:04.853822    2765 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:04.854820 kubelet[2765]: E0213 20:45:04.854239    2765 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-a-d3f644b76a\": node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:45:05.483532 kubelet[2765]: I0213 20:45:05.483494    2765 apiserver.go:52] "Watching apiserver"
Feb 13 20:45:05.491110 kubelet[2765]: I0213 20:45:05.491064    2765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 20:45:07.007066 systemd[1]: Reloading requested from client PID 3037 ('systemctl') (unit session-9.scope)...
Feb 13 20:45:07.007082 systemd[1]: Reloading...
Feb 13 20:45:07.089408 zram_generator::config[3074]: No configuration found.
Feb 13 20:45:07.213243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:45:07.322889 systemd[1]: Reloading finished in 315 ms.
Feb 13 20:45:07.363839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:45:07.377624 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:45:07.377993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:45:07.378168 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 115.0M memory peak, 0B memory swap peak.
Feb 13 20:45:07.382722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:45:07.477691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:45:07.487936 (kubelet)[3141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:45:07.533549 kubelet[3141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:45:07.533549 kubelet[3141]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:45:07.533549 kubelet[3141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:45:10.658666 kubelet[3141]: I0213 20:45:07.533544    3141 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:45:10.658666 kubelet[3141]: I0213 20:45:07.538745    3141 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 20:45:10.658666 kubelet[3141]: I0213 20:45:07.538771    3141 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:45:10.658666 kubelet[3141]: I0213 20:45:07.539007    3141 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 20:45:10.660871 kubelet[3141]: I0213 20:45:10.660838    3141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 20:45:10.666459 kubelet[3141]: I0213 20:45:10.666417    3141 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:45:10.670022 kubelet[3141]: E0213 20:45:10.669962    3141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:45:10.670022 kubelet[3141]: I0213 20:45:10.670018    3141 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:45:10.674350 kubelet[3141]: I0213 20:45:10.673202    3141 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 20:45:10.674350 kubelet[3141]: I0213 20:45:10.673387    3141 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 20:45:10.674350 kubelet[3141]: I0213 20:45:10.673514    3141 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:45:10.674350 kubelet[3141]: I0213 20:45:10.673545    3141 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-d3f644b76a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.673851    3141 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.673861    3141 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.673905    3141 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.674042    3141 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.674055    3141 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.674077    3141 kubelet.go:314] "Adding apiserver pod source"
Feb 13 20:45:10.674588 kubelet[3141]: I0213 20:45:10.674088    3141 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:45:10.676958 kubelet[3141]: I0213 20:45:10.676824    3141 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:45:10.678095 kubelet[3141]: I0213 20:45:10.678062    3141 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:45:10.682344 kubelet[3141]: I0213 20:45:10.682089    3141 server.go:1269] "Started kubelet"
Feb 13 20:45:10.684030 kubelet[3141]: I0213 20:45:10.683447    3141 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:45:10.685177 kubelet[3141]: I0213 20:45:10.684422    3141 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 20:45:10.686636 kubelet[3141]: I0213 20:45:10.686584    3141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:45:10.686971 kubelet[3141]: I0213 20:45:10.686931    3141 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:45:10.687807 kubelet[3141]: I0213 20:45:10.687697    3141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:45:10.688506 kubelet[3141]: I0213 20:45:10.688474    3141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:45:10.689129 kubelet[3141]: I0213 20:45:10.689012    3141 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 20:45:10.689279 kubelet[3141]: E0213 20:45:10.689214    3141 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-d3f644b76a\" not found"
Feb 13 20:45:10.690772 kubelet[3141]: I0213 20:45:10.690389    3141 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 20:45:10.690772 kubelet[3141]: I0213 20:45:10.690509    3141 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:45:10.695554 kubelet[3141]: I0213 20:45:10.694118    3141 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:45:10.700678 kubelet[3141]: I0213 20:45:10.700653    3141 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:45:10.701297 kubelet[3141]: I0213 20:45:10.700971    3141 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:45:10.701797 kubelet[3141]: E0213 20:45:10.701505    3141 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:45:10.737525 kubelet[3141]: I0213 20:45:10.737262    3141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:45:10.740325 kubelet[3141]: I0213 20:45:10.740029    3141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:45:10.740325 kubelet[3141]: I0213 20:45:10.740069    3141 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 20:45:10.740325 kubelet[3141]: I0213 20:45:10.740094    3141 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 20:45:10.740325 kubelet[3141]: E0213 20:45:10.740136    3141 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:45:10.775881 kubelet[3141]: I0213 20:45:10.775856    3141 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:45:10.776403 kubelet[3141]: I0213 20:45:10.776096    3141 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:45:10.776403 kubelet[3141]: I0213 20:45:10.776122    3141 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:45:10.776403 kubelet[3141]: I0213 20:45:10.776284    3141 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 20:45:10.776403 kubelet[3141]: I0213 20:45:10.776294    3141 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 20:45:10.776403 kubelet[3141]: I0213 20:45:10.776312    3141 policy_none.go:49] "None policy: Start"
Feb 13 20:45:10.777967 kubelet[3141]: I0213 20:45:10.777214    3141 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:45:10.777967 kubelet[3141]: I0213 20:45:10.777239    3141 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:45:10.777967 kubelet[3141]: I0213 20:45:10.777443    3141 state_mem.go:75] "Updated machine memory state"
Feb 13 20:45:10.781816 kubelet[3141]: I0213 20:45:10.781793    3141 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:45:10.782371 kubelet[3141]: I0213 20:45:10.782326    3141 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:45:10.782493 kubelet[3141]: I0213 20:45:10.782458    3141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:45:10.783151 kubelet[3141]: I0213 20:45:10.783133    3141 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:45:10.858956 kubelet[3141]: W0213 20:45:10.858854    3141 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:45:10.863089 kubelet[3141]: W0213 20:45:10.862916    3141 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:45:10.863714 kubelet[3141]: W0213 20:45:10.862860    3141 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 20:45:10.885303 kubelet[3141]: I0213 20:45:10.885278    3141 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.892793 kubelet[3141]: I0213 20:45:10.892723    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a64d6397b114d75a7b6f3f869a60fb1f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-d3f644b76a\" (UID: \"a64d6397b114d75a7b6f3f869a60fb1f\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.892793 kubelet[3141]: I0213 20:45:10.892778    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.892793 kubelet[3141]: I0213 20:45:10.892825    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.892793 kubelet[3141]: I0213 20:45:10.892848    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.892793 kubelet[3141]: I0213 20:45:10.892865    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.893291 kubelet[3141]: I0213 20:45:10.892880    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf23dab1765bc86e527202369ce1850c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-d3f644b76a\" (UID: \"bf23dab1765bc86e527202369ce1850c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.893291 kubelet[3141]: I0213 20:45:10.892894    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.893291 kubelet[3141]: I0213 20:45:10.892931    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.893291 kubelet[3141]: I0213 20:45:10.892959    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f397481b8c83694a691939b3ae022018-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-d3f644b76a\" (UID: \"f397481b8c83694a691939b3ae022018\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.896406 kubelet[3141]: I0213 20:45:10.896370    3141 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:10.896603 kubelet[3141]: I0213 20:45:10.896570    3141 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:11.681858 kubelet[3141]: I0213 20:45:11.681602    3141 apiserver.go:52] "Watching apiserver"
Feb 13 20:45:11.790827 kubelet[3141]: I0213 20:45:11.790763    3141 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 20:45:11.834501 kubelet[3141]: I0213 20:45:11.834007    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-d3f644b76a" podStartSLOduration=1.83398709 podStartE2EDuration="1.83398709s" podCreationTimestamp="2025-02-13 20:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:11.81981291 +0000 UTC m=+4.327630055" watchObservedRunningTime="2025-02-13 20:45:11.83398709 +0000 UTC m=+4.341804315"
Feb 13 20:45:11.867109 kubelet[3141]: I0213 20:45:11.867039    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-d3f644b76a" podStartSLOduration=1.867022618 podStartE2EDuration="1.867022618s" podCreationTimestamp="2025-02-13 20:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:11.834372731 +0000 UTC m=+4.342189876" watchObservedRunningTime="2025-02-13 20:45:11.867022618 +0000 UTC m=+4.374839763"
Feb 13 20:45:11.897994 kubelet[3141]: I0213 20:45:11.897934    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-d3f644b76a" podStartSLOduration=1.897914502 podStartE2EDuration="1.897914502s" podCreationTimestamp="2025-02-13 20:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:11.867949659 +0000 UTC m=+4.375766804" watchObservedRunningTime="2025-02-13 20:45:11.897914502 +0000 UTC m=+4.405731647"
Feb 13 20:45:12.990198 kubelet[3141]: I0213 20:45:12.990156    3141 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 20:45:12.990658 containerd[1726]: time="2025-02-13T20:45:12.990612349Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:45:12.991021 kubelet[3141]: I0213 20:45:12.990972    3141 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 20:45:13.994096 systemd[1]: Created slice kubepods-besteffort-podda38a4d3_6002_48be_bc85_68f46793836b.slice - libcontainer container kubepods-besteffort-podda38a4d3_6002_48be_bc85_68f46793836b.slice.
Feb 13 20:45:14.010542 kubelet[3141]: I0213 20:45:14.010449    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da38a4d3-6002-48be-bc85-68f46793836b-kube-proxy\") pod \"kube-proxy-c9wz7\" (UID: \"da38a4d3-6002-48be-bc85-68f46793836b\") " pod="kube-system/kube-proxy-c9wz7"
Feb 13 20:45:14.010542 kubelet[3141]: I0213 20:45:14.010496    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgpt\" (UniqueName: \"kubernetes.io/projected/da38a4d3-6002-48be-bc85-68f46793836b-kube-api-access-7kgpt\") pod \"kube-proxy-c9wz7\" (UID: \"da38a4d3-6002-48be-bc85-68f46793836b\") " pod="kube-system/kube-proxy-c9wz7"
Feb 13 20:45:14.010542 kubelet[3141]: I0213 20:45:14.010518    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da38a4d3-6002-48be-bc85-68f46793836b-xtables-lock\") pod \"kube-proxy-c9wz7\" (UID: \"da38a4d3-6002-48be-bc85-68f46793836b\") " pod="kube-system/kube-proxy-c9wz7"
Feb 13 20:45:14.010542 kubelet[3141]: I0213 20:45:14.010532    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da38a4d3-6002-48be-bc85-68f46793836b-lib-modules\") pod \"kube-proxy-c9wz7\" (UID: \"da38a4d3-6002-48be-bc85-68f46793836b\") " pod="kube-system/kube-proxy-c9wz7"
Feb 13 20:45:14.085654 systemd[1]: Created slice kubepods-besteffort-podf1f792e8_a376_41d1_b5e4_73bdf823584d.slice - libcontainer container kubepods-besteffort-podf1f792e8_a376_41d1_b5e4_73bdf823584d.slice.
Feb 13 20:45:14.111065 kubelet[3141]: I0213 20:45:14.111018    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1f792e8-a376-41d1-b5e4-73bdf823584d-var-lib-calico\") pod \"tigera-operator-76c4976dd7-tffqz\" (UID: \"f1f792e8-a376-41d1-b5e4-73bdf823584d\") " pod="tigera-operator/tigera-operator-76c4976dd7-tffqz"
Feb 13 20:45:14.111065 kubelet[3141]: I0213 20:45:14.111062    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m49\" (UniqueName: \"kubernetes.io/projected/f1f792e8-a376-41d1-b5e4-73bdf823584d-kube-api-access-s9m49\") pod \"tigera-operator-76c4976dd7-tffqz\" (UID: \"f1f792e8-a376-41d1-b5e4-73bdf823584d\") " pod="tigera-operator/tigera-operator-76c4976dd7-tffqz"
Feb 13 20:45:14.302007 containerd[1726]: time="2025-02-13T20:45:14.301888230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9wz7,Uid:da38a4d3-6002-48be-bc85-68f46793836b,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:14.390386 containerd[1726]: time="2025-02-13T20:45:14.390281877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-tffqz,Uid:f1f792e8-a376-41d1-b5e4-73bdf823584d,Namespace:tigera-operator,Attempt:0,}"
Feb 13 20:45:15.035473 containerd[1726]: time="2025-02-13T20:45:15.035361842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:15.035473 containerd[1726]: time="2025-02-13T20:45:15.035421483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:15.035473 containerd[1726]: time="2025-02-13T20:45:15.035449603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:15.035890 containerd[1726]: time="2025-02-13T20:45:15.035544203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:15.057786 systemd[1]: Started cri-containerd-58faacfdfb074f5eab7241a105b715a6945f9451bcacfc1afd33804e2860f88d.scope - libcontainer container 58faacfdfb074f5eab7241a105b715a6945f9451bcacfc1afd33804e2860f88d.
Feb 13 20:45:15.084549 containerd[1726]: time="2025-02-13T20:45:15.084241073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:15.084549 containerd[1726]: time="2025-02-13T20:45:15.084317753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:15.084549 containerd[1726]: time="2025-02-13T20:45:15.084596113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:15.085445 containerd[1726]: time="2025-02-13T20:45:15.085393514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:15.086525 containerd[1726]: time="2025-02-13T20:45:15.086479556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9wz7,Uid:da38a4d3-6002-48be-bc85-68f46793836b,Namespace:kube-system,Attempt:0,} returns sandbox id \"58faacfdfb074f5eab7241a105b715a6945f9451bcacfc1afd33804e2860f88d\""
Feb 13 20:45:15.101748 containerd[1726]: time="2025-02-13T20:45:15.101594817Z" level=info msg="CreateContainer within sandbox \"58faacfdfb074f5eab7241a105b715a6945f9451bcacfc1afd33804e2860f88d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 20:45:15.106810 systemd[1]: Started cri-containerd-0732ddc011c15d3b491e756ddc5b8e00081c1271908da9181ddb48bcfbbf7d49.scope - libcontainer container 0732ddc011c15d3b491e756ddc5b8e00081c1271908da9181ddb48bcfbbf7d49.
Feb 13 20:45:15.140522 containerd[1726]: time="2025-02-13T20:45:15.140399473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-tffqz,Uid:f1f792e8-a376-41d1-b5e4-73bdf823584d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0732ddc011c15d3b491e756ddc5b8e00081c1271908da9181ddb48bcfbbf7d49\""
Feb 13 20:45:15.157444 containerd[1726]: time="2025-02-13T20:45:15.146005081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 20:45:15.868520 sudo[2224]: pam_unix(sudo:session): session closed for user root
Feb 13 20:45:15.939189 sshd[2221]: pam_unix(sshd:session): session closed for user core
Feb 13 20:45:15.941819 systemd[1]: sshd@6-10.200.20.20:22-10.200.16.10:33074.service: Deactivated successfully.
Feb 13 20:45:15.943888 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:45:15.944139 systemd[1]: session-9.scope: Consumed 5.551s CPU time, 152.9M memory peak, 0B memory swap peak.
Feb 13 20:45:15.945579 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:45:15.947002 systemd-logind[1698]: Removed session 9.
Feb 13 20:45:18.363273 containerd[1726]: time="2025-02-13T20:45:18.363198935Z" level=info msg="CreateContainer within sandbox \"58faacfdfb074f5eab7241a105b715a6945f9451bcacfc1afd33804e2860f88d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2416cd4183b6b6fb7617d4066fe12a87bb8b4c11338c3eba05b2c04460767750\""
Feb 13 20:45:18.363979 containerd[1726]: time="2025-02-13T20:45:18.363765295Z" level=info msg="StartContainer for \"2416cd4183b6b6fb7617d4066fe12a87bb8b4c11338c3eba05b2c04460767750\""
Feb 13 20:45:18.401542 systemd[1]: Started cri-containerd-2416cd4183b6b6fb7617d4066fe12a87bb8b4c11338c3eba05b2c04460767750.scope - libcontainer container 2416cd4183b6b6fb7617d4066fe12a87bb8b4c11338c3eba05b2c04460767750.
Feb 13 20:45:18.506277 containerd[1726]: time="2025-02-13T20:45:18.506039172Z" level=info msg="StartContainer for \"2416cd4183b6b6fb7617d4066fe12a87bb8b4c11338c3eba05b2c04460767750\" returns successfully"
Feb 13 20:45:19.260221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037068873.mount: Deactivated successfully.
Feb 13 20:45:19.522364 kubelet[3141]: I0213 20:45:19.521985    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9wz7" podStartSLOduration=6.521968899 podStartE2EDuration="6.521968899s" podCreationTimestamp="2025-02-13 20:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:18.790277766 +0000 UTC m=+11.298094911" watchObservedRunningTime="2025-02-13 20:45:19.521968899 +0000 UTC m=+12.029786004"
Feb 13 20:45:19.627366 containerd[1726]: time="2025-02-13T20:45:19.626579084Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:19.629182 containerd[1726]: time="2025-02-13T20:45:19.629146968Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 20:45:19.631918 containerd[1726]: time="2025-02-13T20:45:19.631885771Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:19.636311 containerd[1726]: time="2025-02-13T20:45:19.636273537Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:19.637247 containerd[1726]: time="2025-02-13T20:45:19.636901538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 4.490859297s"
Feb 13 20:45:19.637661 containerd[1726]: time="2025-02-13T20:45:19.637641619Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 20:45:19.639928 containerd[1726]: time="2025-02-13T20:45:19.639750102Z" level=info msg="CreateContainer within sandbox \"0732ddc011c15d3b491e756ddc5b8e00081c1271908da9181ddb48bcfbbf7d49\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 20:45:19.679899 containerd[1726]: time="2025-02-13T20:45:19.679855918Z" level=info msg="CreateContainer within sandbox \"0732ddc011c15d3b491e756ddc5b8e00081c1271908da9181ddb48bcfbbf7d49\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a90fcd84c4179b9b5a2125c5c2a2be973342390d841c6467629e0960a1bd930c\""
Feb 13 20:45:19.680683 containerd[1726]: time="2025-02-13T20:45:19.680602079Z" level=info msg="StartContainer for \"a90fcd84c4179b9b5a2125c5c2a2be973342390d841c6467629e0960a1bd930c\""
Feb 13 20:45:19.705518 systemd[1]: Started cri-containerd-a90fcd84c4179b9b5a2125c5c2a2be973342390d841c6467629e0960a1bd930c.scope - libcontainer container a90fcd84c4179b9b5a2125c5c2a2be973342390d841c6467629e0960a1bd930c.
Feb 13 20:45:19.731391 containerd[1726]: time="2025-02-13T20:45:19.731273509Z" level=info msg="StartContainer for \"a90fcd84c4179b9b5a2125c5c2a2be973342390d841c6467629e0960a1bd930c\" returns successfully"
Feb 13 20:45:19.803749 kubelet[3141]: I0213 20:45:19.803601    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-tffqz" podStartSLOduration=1.307017944 podStartE2EDuration="5.803582849s" podCreationTimestamp="2025-02-13 20:45:14 +0000 UTC" firstStartedPulling="2025-02-13 20:45:15.141879395 +0000 UTC m=+7.649696540" lastFinishedPulling="2025-02-13 20:45:19.6384443 +0000 UTC m=+12.146261445" observedRunningTime="2025-02-13 20:45:19.803480089 +0000 UTC m=+12.311297234" watchObservedRunningTime="2025-02-13 20:45:19.803582849 +0000 UTC m=+12.311399994"
Feb 13 20:45:23.726504 systemd[1]: Created slice kubepods-besteffort-podfda143c3_44cc_4679_9ef1_3741ce1018e0.slice - libcontainer container kubepods-besteffort-podfda143c3_44cc_4679_9ef1_3741ce1018e0.slice.
Feb 13 20:45:23.833922 kubelet[3141]: W0213 20:45:23.833878    3141 reflector.go:561] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081.3.1-a-d3f644b76a" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.1-a-d3f644b76a' and this object
Feb 13 20:45:23.834396 kubelet[3141]: E0213 20:45:23.833923    3141 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4081.3.1-a-d3f644b76a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.1-a-d3f644b76a' and this object" logger="UnhandledError"
Feb 13 20:45:23.834396 kubelet[3141]: W0213 20:45:23.833994    3141 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081.3.1-a-d3f644b76a" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.1-a-d3f644b76a' and this object
Feb 13 20:45:23.834396 kubelet[3141]: E0213 20:45:23.834008    3141 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4081.3.1-a-d3f644b76a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.1-a-d3f644b76a' and this object" logger="UnhandledError"
Feb 13 20:45:23.838435 systemd[1]: Created slice kubepods-besteffort-pod6167cb6e_893d_4d13_98d0_d9e62970ff54.slice - libcontainer container kubepods-besteffort-pod6167cb6e_893d_4d13_98d0_d9e62970ff54.slice.
Feb 13 20:45:23.869915 kubelet[3141]: I0213 20:45:23.869812    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda143c3-44cc-4679-9ef1-3741ce1018e0-tigera-ca-bundle\") pod \"calico-typha-7bbb679784-4mvfr\" (UID: \"fda143c3-44cc-4679-9ef1-3741ce1018e0\") " pod="calico-system/calico-typha-7bbb679784-4mvfr"
Feb 13 20:45:23.869915 kubelet[3141]: I0213 20:45:23.869861    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fda143c3-44cc-4679-9ef1-3741ce1018e0-typha-certs\") pod \"calico-typha-7bbb679784-4mvfr\" (UID: \"fda143c3-44cc-4679-9ef1-3741ce1018e0\") " pod="calico-system/calico-typha-7bbb679784-4mvfr"
Feb 13 20:45:23.869915 kubelet[3141]: I0213 20:45:23.869881    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lsml\" (UniqueName: \"kubernetes.io/projected/fda143c3-44cc-4679-9ef1-3741ce1018e0-kube-api-access-7lsml\") pod \"calico-typha-7bbb679784-4mvfr\" (UID: \"fda143c3-44cc-4679-9ef1-3741ce1018e0\") " pod="calico-system/calico-typha-7bbb679784-4mvfr"
Feb 13 20:45:23.949389 kubelet[3141]: E0213 20:45:23.949098    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:23.970306 kubelet[3141]: I0213 20:45:23.970161    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-flexvol-driver-host\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970306 kubelet[3141]: I0213 20:45:23.970209    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6167cb6e-893d-4d13-98d0-d9e62970ff54-node-certs\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970306 kubelet[3141]: I0213 20:45:23.970228    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-cni-bin-dir\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970306 kubelet[3141]: I0213 20:45:23.970287    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-xtables-lock\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970306 kubelet[3141]: I0213 20:45:23.970302    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-var-run-calico\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970903 kubelet[3141]: I0213 20:45:23.970317    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-policysync\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970903 kubelet[3141]: I0213 20:45:23.970350    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-var-lib-calico\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970903 kubelet[3141]: I0213 20:45:23.970382    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6167cb6e-893d-4d13-98d0-d9e62970ff54-tigera-ca-bundle\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970903 kubelet[3141]: I0213 20:45:23.970399    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-lib-modules\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.970903 kubelet[3141]: I0213 20:45:23.970414    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-cni-net-dir\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.971086 kubelet[3141]: I0213 20:45:23.970428    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvrl\" (UniqueName: \"kubernetes.io/projected/6167cb6e-893d-4d13-98d0-d9e62970ff54-kube-api-access-flvrl\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:23.971086 kubelet[3141]: I0213 20:45:23.970443    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6167cb6e-893d-4d13-98d0-d9e62970ff54-cni-log-dir\") pod \"calico-node-mmrw5\" (UID: \"6167cb6e-893d-4d13-98d0-d9e62970ff54\") " pod="calico-system/calico-node-mmrw5"
Feb 13 20:45:24.031429 containerd[1726]: time="2025-02-13T20:45:24.031041303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbb679784-4mvfr,Uid:fda143c3-44cc-4679-9ef1-3741ce1018e0,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:24.073613 kubelet[3141]: I0213 20:45:24.073141    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e5305d-78c3-49cd-bb6d-90df1e2b864e-kubelet-dir\") pod \"csi-node-driver-z9m58\" (UID: \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\") " pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:24.073613 kubelet[3141]: I0213 20:45:24.073214    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9pg\" (UniqueName: \"kubernetes.io/projected/22e5305d-78c3-49cd-bb6d-90df1e2b864e-kube-api-access-5n9pg\") pod \"csi-node-driver-z9m58\" (UID: \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\") " pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:24.073613 kubelet[3141]: I0213 20:45:24.073266    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22e5305d-78c3-49cd-bb6d-90df1e2b864e-varrun\") pod \"csi-node-driver-z9m58\" (UID: \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\") " pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:24.073613 kubelet[3141]: I0213 20:45:24.073282    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22e5305d-78c3-49cd-bb6d-90df1e2b864e-registration-dir\") pod \"csi-node-driver-z9m58\" (UID: \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\") " pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:24.073613 kubelet[3141]: I0213 20:45:24.073309    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22e5305d-78c3-49cd-bb6d-90df1e2b864e-socket-dir\") pod \"csi-node-driver-z9m58\" (UID: \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\") " pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:24.081375 kubelet[3141]: E0213 20:45:24.079807    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:24.081375 kubelet[3141]: W0213 20:45:24.079841    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:24.081375 kubelet[3141]: E0213 20:45:24.079880    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:24.108710 containerd[1726]: time="2025-02-13T20:45:24.107799849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:24.108710 containerd[1726]: time="2025-02-13T20:45:24.107865770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:24.108710 containerd[1726]: time="2025-02-13T20:45:24.107877290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:24.108710 containerd[1726]: time="2025-02-13T20:45:24.107966370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:24.137537 systemd[1]: Started cri-containerd-2f7d7c1ef621ab25d9c49fcc1e5189a97c78787fd4c389568ec74264793793ec.scope - libcontainer container 2f7d7c1ef621ab25d9c49fcc1e5189a97c78787fd4c389568ec74264793793ec.
Feb 13 20:45:24.178746 containerd[1726]: time="2025-02-13T20:45:24.178669708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbb679784-4mvfr,Uid:fda143c3-44cc-4679-9ef1-3741ce1018e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f7d7c1ef621ab25d9c49fcc1e5189a97c78787fd4c389568ec74264793793ec\""
Feb 13 20:45:24.181058 containerd[1726]: time="2025-02-13T20:45:24.181027831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 20:45:25.044064 containerd[1726]: time="2025-02-13T20:45:25.043937521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mmrw5,Uid:6167cb6e-893d-4d13-98d0-d9e62970ff54,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:25.093896 containerd[1726]: time="2025-02-13T20:45:25.093756666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:25.093896 containerd[1726]: time="2025-02-13T20:45:25.093839186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:25.093896 containerd[1726]: time="2025-02-13T20:45:25.093850226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:25.094261 containerd[1726]: time="2025-02-13T20:45:25.093945706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:25.115954 systemd[1]: run-containerd-runc-k8s.io-bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db-runc.wSlKNm.mount: Deactivated successfully.
Feb 13 20:45:25.123624 systemd[1]: Started cri-containerd-bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db.scope - libcontainer container bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db.
Feb 13 20:45:25.157122 containerd[1726]: time="2025-02-13T20:45:25.157068429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mmrw5,Uid:6167cb6e-893d-4d13-98d0-d9e62970ff54,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\""
Feb 13 20:45:25.741178 kubelet[3141]: E0213 20:45:25.741133    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:26.497068 containerd[1726]: time="2025-02-13T20:45:26.497009538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:26.502900 containerd[1726]: time="2025-02-13T20:45:26.502719425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Feb 13 20:45:26.507742 containerd[1726]: time="2025-02-13T20:45:26.507680312Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:26.514963 containerd[1726]: time="2025-02-13T20:45:26.514907401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:26.515807 containerd[1726]: time="2025-02-13T20:45:26.515681842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.334619811s"
Feb 13 20:45:26.515807 containerd[1726]: time="2025-02-13T20:45:26.515717082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Feb 13 20:45:26.518083 containerd[1726]: time="2025-02-13T20:45:26.517860285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 20:45:26.531278 containerd[1726]: time="2025-02-13T20:45:26.531222062Z" level=info msg="CreateContainer within sandbox \"2f7d7c1ef621ab25d9c49fcc1e5189a97c78787fd4c389568ec74264793793ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 20:45:26.581792 containerd[1726]: time="2025-02-13T20:45:26.581730288Z" level=info msg="CreateContainer within sandbox \"2f7d7c1ef621ab25d9c49fcc1e5189a97c78787fd4c389568ec74264793793ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db6b30302d15ff6c872a93c9ae25e8a4ed44f72a5174d3bf452f2f5f3aa421ac\""
Feb 13 20:45:26.582391 containerd[1726]: time="2025-02-13T20:45:26.582364929Z" level=info msg="StartContainer for \"db6b30302d15ff6c872a93c9ae25e8a4ed44f72a5174d3bf452f2f5f3aa421ac\""
Feb 13 20:45:26.607564 systemd[1]: Started cri-containerd-db6b30302d15ff6c872a93c9ae25e8a4ed44f72a5174d3bf452f2f5f3aa421ac.scope - libcontainer container db6b30302d15ff6c872a93c9ae25e8a4ed44f72a5174d3bf452f2f5f3aa421ac.
Feb 13 20:45:26.651412 containerd[1726]: time="2025-02-13T20:45:26.651358219Z" level=info msg="StartContainer for \"db6b30302d15ff6c872a93c9ae25e8a4ed44f72a5174d3bf452f2f5f3aa421ac\" returns successfully"
Feb 13 20:45:26.891914 kubelet[3141]: E0213 20:45:26.891881    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.892673 kubelet[3141]: W0213 20:45:26.892290    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.892673 kubelet[3141]: E0213 20:45:26.892325    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.892673 kubelet[3141]: E0213 20:45:26.892574    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.892673 kubelet[3141]: W0213 20:45:26.892586    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.892673 kubelet[3141]: E0213 20:45:26.892597    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.893020 kubelet[3141]: E0213 20:45:26.892882    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.893020 kubelet[3141]: W0213 20:45:26.892892    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.893020 kubelet[3141]: E0213 20:45:26.892903    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.893343 kubelet[3141]: E0213 20:45:26.893312    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.893508 kubelet[3141]: W0213 20:45:26.893434    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.893508 kubelet[3141]: E0213 20:45:26.893453    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.893838 kubelet[3141]: E0213 20:45:26.893789    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.893838 kubelet[3141]: W0213 20:45:26.893801    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.893838 kubelet[3141]: E0213 20:45:26.893812    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.895074 kubelet[3141]: E0213 20:45:26.894210    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.895074 kubelet[3141]: W0213 20:45:26.894224    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.895405 kubelet[3141]: E0213 20:45:26.894235    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.895648 kubelet[3141]: E0213 20:45:26.895528    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.895648 kubelet[3141]: W0213 20:45:26.895540    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.895648 kubelet[3141]: E0213 20:45:26.895552    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.896001 kubelet[3141]: E0213 20:45:26.895863    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.896001 kubelet[3141]: W0213 20:45:26.895878    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.896001 kubelet[3141]: E0213 20:45:26.895889    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.896218 kubelet[3141]: E0213 20:45:26.896119    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.896218 kubelet[3141]: W0213 20:45:26.896130    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.896218 kubelet[3141]: E0213 20:45:26.896139    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.896786 kubelet[3141]: E0213 20:45:26.896728    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.896786 kubelet[3141]: W0213 20:45:26.896743    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.896786 kubelet[3141]: E0213 20:45:26.896754    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.897644 kubelet[3141]: E0213 20:45:26.897492    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.897644 kubelet[3141]: W0213 20:45:26.897512    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.897644 kubelet[3141]: E0213 20:45:26.897526    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.898394 kubelet[3141]: E0213 20:45:26.897845    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.898394 kubelet[3141]: W0213 20:45:26.897859    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.898394 kubelet[3141]: E0213 20:45:26.897869    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.898745 kubelet[3141]: E0213 20:45:26.898731    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.898943 kubelet[3141]: W0213 20:45:26.898774    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.898943 kubelet[3141]: E0213 20:45:26.898789    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.899062 kubelet[3141]: E0213 20:45:26.899050    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.899254 kubelet[3141]: W0213 20:45:26.899239    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.899318 kubelet[3141]: E0213 20:45:26.899307    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.900324 kubelet[3141]: E0213 20:45:26.900005    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.900324 kubelet[3141]: W0213 20:45:26.900021    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.900324 kubelet[3141]: E0213 20:45:26.900033    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.900810 kubelet[3141]: E0213 20:45:26.900631    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.900810 kubelet[3141]: W0213 20:45:26.900646    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.900810 kubelet[3141]: E0213 20:45:26.900658    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.901034 kubelet[3141]: E0213 20:45:26.900935    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.901034 kubelet[3141]: W0213 20:45:26.900953    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.901034 kubelet[3141]: E0213 20:45:26.900974    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.901504 kubelet[3141]: E0213 20:45:26.901219    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.901504 kubelet[3141]: W0213 20:45:26.901233    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.901504 kubelet[3141]: E0213 20:45:26.901248    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.901504 kubelet[3141]: E0213 20:45:26.901391    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.901504 kubelet[3141]: W0213 20:45:26.901399    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.901504 kubelet[3141]: E0213 20:45:26.901407    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.901883 kubelet[3141]: E0213 20:45:26.901864    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.901883 kubelet[3141]: W0213 20:45:26.901878    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.901969 kubelet[3141]: E0213 20:45:26.901895    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.903799 kubelet[3141]: E0213 20:45:26.903568    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.903799 kubelet[3141]: W0213 20:45:26.903589    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.903799 kubelet[3141]: E0213 20:45:26.903647    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.904160 kubelet[3141]: E0213 20:45:26.904078    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.904160 kubelet[3141]: W0213 20:45:26.904093    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.904233 kubelet[3141]: E0213 20:45:26.904155    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.904457 kubelet[3141]: E0213 20:45:26.904409    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.904457 kubelet[3141]: W0213 20:45:26.904421    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.904705 kubelet[3141]: E0213 20:45:26.904548    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.905718 kubelet[3141]: E0213 20:45:26.905085    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.905939 kubelet[3141]: W0213 20:45:26.905813    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.905939 kubelet[3141]: E0213 20:45:26.905852    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.906452 kubelet[3141]: E0213 20:45:26.906291    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.906452 kubelet[3141]: W0213 20:45:26.906308    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.906614 kubelet[3141]: E0213 20:45:26.906580    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.906819 kubelet[3141]: E0213 20:45:26.906689    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.906819 kubelet[3141]: W0213 20:45:26.906699    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.906819 kubelet[3141]: E0213 20:45:26.906724    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.907201 kubelet[3141]: E0213 20:45:26.907182    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.907493 kubelet[3141]: W0213 20:45:26.907384    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.907493 kubelet[3141]: E0213 20:45:26.907418    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:26.908371 kubelet[3141]: E0213 20:45:26.908325    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:26.908459 kubelet[3141]: W0213 20:45:26.908362    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:26.908459 kubelet[3141]: E0213 20:45:26.908400    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
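The triplet above is kubelet's FlexVolume prober at work: on each rescan of the plugin directory it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal the driver's stdout as JSON. The uds executable has not been installed yet (Calico's flexvol-driver init container delivers it later in this log), so the exec fails, stdout is empty, and the JSON decode dies with "unexpected end of JSON input"; the prober retries frequently, which is why the triplet recurs in bursts. A minimal sketch of the init handshake the prober expects, following the documented FlexVolume call convention (this stub is illustrative, not Calico's actual uds driver):

    // uds.go: a minimal FlexVolume driver stub showing the init handshake
    // kubelet's prober expects. Illustrative only, not Calico's uds binary.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON envelope a FlexVolume driver must print.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        st := driverStatus{Status: "Not supported"}
        if os.Args[1] == "init" {
            // A non-empty JSON body here is exactly what would stop the
            // "unexpected end of JSON input" errors seen above.
            st = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        }
        out, _ := json.Marshal(st)
        fmt.Println(string(out))
    }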
Feb 13 20:45:27.741578 kubelet[3141]: E0213 20:45:27.741527    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:27.796180 kubelet[3141]: I0213 20:45:27.795999    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:45:27.805066 kubelet[3141]: E0213 20:45:27.804973    3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:45:27.805066 kubelet[3141]: W0213 20:45:27.804997    3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:45:27.805066 kubelet[3141]: E0213 20:45:27.805018    3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:45:28.329842 containerd[1726]: time="2025-02-13T20:45:28.329765410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:28.334557 containerd[1726]: time="2025-02-13T20:45:28.334253616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Feb 13 20:45:28.339762 containerd[1726]: time="2025-02-13T20:45:28.339675263Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:28.351184 containerd[1726]: time="2025-02-13T20:45:28.351117638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:28.352188 containerd[1726]: time="2025-02-13T20:45:28.351801479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.833906674s"
Feb 13 20:45:28.352188 containerd[1726]: time="2025-02-13T20:45:28.351840559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
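In the pull record above, bytes read=5117811 counts what containerd actually transferred for this pull (layers already present locally are not re-fetched), while the reported size "6487425" is the total size containerd records for the image, and 1.833906674s is the wall-clock duration of the whole pull. As a rough check of the implied transfer rate (numbers copied from the log; the MiB/s figure is plain arithmetic, not something containerd prints):

    package main

    import "fmt"

    func main() {
        const bytesRead = 5117811       // transferred bytes, from the log
        const pullSeconds = 1.833906674 // reported pull duration, from the log
        rate := float64(bytesRead) / pullSeconds / (1 << 20)
        fmt.Printf("~%.2f MiB/s effective pull rate\n", rate) // ~2.66 MiB/s
    }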
Feb 13 20:45:28.354615 containerd[1726]: time="2025-02-13T20:45:28.354573322Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 20:45:28.406972 containerd[1726]: time="2025-02-13T20:45:28.406902551Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2\""
Feb 13 20:45:28.407911 containerd[1726]: time="2025-02-13T20:45:28.407726632Z" level=info msg="StartContainer for \"ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2\""
Feb 13 20:45:28.442549 systemd[1]: Started cri-containerd-ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2.scope - libcontainer container ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2.
Feb 13 20:45:28.471162 containerd[1726]: time="2025-02-13T20:45:28.471007234Z" level=info msg="StartContainer for \"ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2\" returns successfully"
Feb 13 20:45:28.483663 systemd[1]: cri-containerd-ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2.scope: Deactivated successfully.
Feb 13 20:45:28.505204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2-rootfs.mount: Deactivated successfully.
Feb 13 20:45:28.819849 kubelet[3141]: I0213 20:45:28.819676    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bbb679784-4mvfr" podStartSLOduration=3.482589475 podStartE2EDuration="5.81966017s" podCreationTimestamp="2025-02-13 20:45:23 +0000 UTC" firstStartedPulling="2025-02-13 20:45:24.179942069 +0000 UTC m=+16.687759174" lastFinishedPulling="2025-02-13 20:45:26.517012724 +0000 UTC m=+19.024829869" observedRunningTime="2025-02-13 20:45:26.830939694 +0000 UTC m=+19.338756839" watchObservedRunningTime="2025-02-13 20:45:28.81966017 +0000 UTC m=+21.327477315"
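The startup-latency line decomposes cleanly: observedRunningTime minus podCreationTimestamp gives podStartE2EDuration (about 5.82s), and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, about 2.34s) leaves podStartSLOduration (about 3.48s), consistent with the kubelet convention that the SLO figure excludes time spent pulling images. A check of that arithmetic using only the timestamps printed above (the tiny drift from the logged 3.482589475s comes from monotonic-clock rounding):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 20:45:23 +0000 UTC")
        running := mustParse("2025-02-13 20:45:28.81966017 +0000 UTC")
        pullStart := mustParse("2025-02-13 20:45:24.179942069 +0000 UTC")
        pullEnd := mustParse("2025-02-13 20:45:26.517012724 +0000 UTC")

        e2e := running.Sub(created)      // podStartE2EDuration ~ 5.81966017s
        pull := pullEnd.Sub(pullStart)   // image pull window ~ 2.337070655s
        fmt.Println(e2e, pull, e2e-pull) // e2e - pull ~ 3.4826s ~ podStartSLOduration
    }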
Feb 13 20:45:28.953202 kubelet[3141]: I0213 20:45:28.952989    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:45:29.474136 containerd[1726]: time="2025-02-13T20:45:29.474042464Z" level=info msg="shim disconnected" id=ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2 namespace=k8s.io
Feb 13 20:45:29.474136 containerd[1726]: time="2025-02-13T20:45:29.474101024Z" level=warning msg="cleaning up after shim disconnected" id=ba614943823eeeec986d837ea8fe09b09edaa31a76ffa2367eaa08c4fa54c5d2 namespace=k8s.io
Feb 13 20:45:29.474136 containerd[1726]: time="2025-02-13T20:45:29.474109504Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:45:29.741139 kubelet[3141]: E0213 20:45:29.740994    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:29.804187 containerd[1726]: time="2025-02-13T20:45:29.804078255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 20:45:31.740865 kubelet[3141]: E0213 20:45:31.740698    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:33.740497 kubelet[3141]: E0213 20:45:33.740448    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:34.052034 containerd[1726]: time="2025-02-13T20:45:34.051640473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:34.054947 containerd[1726]: time="2025-02-13T20:45:34.054901157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 20:45:34.062370 containerd[1726]: time="2025-02-13T20:45:34.061819366Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:34.070472 containerd[1726]: time="2025-02-13T20:45:34.070431056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:34.071062 containerd[1726]: time="2025-02-13T20:45:34.071018937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.266835082s"
Feb 13 20:45:34.071062 containerd[1726]: time="2025-02-13T20:45:34.071058137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 20:45:34.074814 containerd[1726]: time="2025-02-13T20:45:34.074123541Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:45:34.120295 containerd[1726]: time="2025-02-13T20:45:34.120240798Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d\""
Feb 13 20:45:34.121381 containerd[1726]: time="2025-02-13T20:45:34.121211519Z" level=info msg="StartContainer for \"97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d\""
Feb 13 20:45:34.154641 systemd[1]: Started cri-containerd-97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d.scope - libcontainer container 97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d.
Feb 13 20:45:34.184444 containerd[1726]: time="2025-02-13T20:45:34.184386798Z" level=info msg="StartContainer for \"97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d\" returns successfully"
Feb 13 20:45:35.645022 containerd[1726]: time="2025-02-13T20:45:35.644973809Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:45:35.647397 systemd[1]: cri-containerd-97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d.scope: Deactivated successfully.
Feb 13 20:45:35.675234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d-rootfs.mount: Deactivated successfully.
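The reload error at 20:45:35.645 is containerd's CNI conf watcher firing on the write of /etc/cni/net.d/calico-kubeconfig: the kubeconfig is among the first files Calico's install-cni container drops, and at that moment the directory still contains no *.conf or *.conflist network config, so the load fails and NetworkReady stays false (hence the recurring "cni plugin not initialized" pod_workers errors). A small sketch of the kind of directory scan that makes this decision, assuming the conventional /etc/cni/net.d layout (illustrative, not containerd's actual loader):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const confDir = "/etc/cni/net.d"
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("no CNI config dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            name := e.Name()
            // Only *.conf and *.conflist entries count as network configs;
            // a file like calico-kubeconfig matches neither suffix.
            if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
                fmt.Println("network config:", filepath.Join(confDir, name))
                found = true
            }
        }
        if !found {
            fmt.Println("no network config found in", confDir, "- CNI stays uninitialized")
        }
    }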
Feb 13 20:45:35.704384 kubelet[3141]: I0213 20:45:35.704269    3141 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 20:45:35.957655 kubelet[3141]: I0213 20:45:35.769697    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01611f6f-431c-448d-b299-3b089d806504-config-volume\") pod \"coredns-6f6b679f8f-zzzs7\" (UID: \"01611f6f-431c-448d-b299-3b089d806504\") " pod="kube-system/coredns-6f6b679f8f-zzzs7"
Feb 13 20:45:35.957655 kubelet[3141]: I0213 20:45:35.769761    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8zm\" (UniqueName: \"kubernetes.io/projected/769f719e-512b-4b14-b16e-ae7dc6a2ea08-kube-api-access-dq8zm\") pod \"calico-kube-controllers-65c76945-fvdq7\" (UID: \"769f719e-512b-4b14-b16e-ae7dc6a2ea08\") " pod="calico-system/calico-kube-controllers-65c76945-fvdq7"
Feb 13 20:45:35.957655 kubelet[3141]: I0213 20:45:35.769801    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjbk\" (UniqueName: \"kubernetes.io/projected/94141c30-c43e-4a2f-8964-6298496fe9ed-kube-api-access-qvjbk\") pod \"calico-apiserver-7bc65467b6-c2v2b\" (UID: \"94141c30-c43e-4a2f-8964-6298496fe9ed\") " pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b"
Feb 13 20:45:35.957655 kubelet[3141]: I0213 20:45:35.769833    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11cb515-89d7-469b-9b5c-347d66dd86cd-config-volume\") pod \"coredns-6f6b679f8f-7t4w9\" (UID: \"a11cb515-89d7-469b-9b5c-347d66dd86cd\") " pod="kube-system/coredns-6f6b679f8f-7t4w9"
Feb 13 20:45:35.957655 kubelet[3141]: I0213 20:45:35.769849    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/693eb617-fd33-495b-bd76-299eb1a516ac-calico-apiserver-certs\") pod \"calico-apiserver-7bc65467b6-ssgrs\" (UID: \"693eb617-fd33-495b-bd76-299eb1a516ac\") " pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs"
Feb 13 20:45:35.748735 systemd[1]: Created slice kubepods-burstable-pod01611f6f_431c_448d_b299_3b089d806504.slice - libcontainer container kubepods-burstable-pod01611f6f_431c_448d_b299_3b089d806504.slice.
Feb 13 20:45:35.957943 kubelet[3141]: I0213 20:45:35.769865    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/769f719e-512b-4b14-b16e-ae7dc6a2ea08-tigera-ca-bundle\") pod \"calico-kube-controllers-65c76945-fvdq7\" (UID: \"769f719e-512b-4b14-b16e-ae7dc6a2ea08\") " pod="calico-system/calico-kube-controllers-65c76945-fvdq7"
Feb 13 20:45:35.957943 kubelet[3141]: I0213 20:45:35.769880    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqz59\" (UniqueName: \"kubernetes.io/projected/693eb617-fd33-495b-bd76-299eb1a516ac-kube-api-access-bqz59\") pod \"calico-apiserver-7bc65467b6-ssgrs\" (UID: \"693eb617-fd33-495b-bd76-299eb1a516ac\") " pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs"
Feb 13 20:45:35.957943 kubelet[3141]: I0213 20:45:35.770014    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnpw\" (UniqueName: \"kubernetes.io/projected/01611f6f-431c-448d-b299-3b089d806504-kube-api-access-prnpw\") pod \"coredns-6f6b679f8f-zzzs7\" (UID: \"01611f6f-431c-448d-b299-3b089d806504\") " pod="kube-system/coredns-6f6b679f8f-zzzs7"
Feb 13 20:45:35.957943 kubelet[3141]: I0213 20:45:35.770035    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94141c30-c43e-4a2f-8964-6298496fe9ed-calico-apiserver-certs\") pod \"calico-apiserver-7bc65467b6-c2v2b\" (UID: \"94141c30-c43e-4a2f-8964-6298496fe9ed\") " pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b"
Feb 13 20:45:35.957943 kubelet[3141]: I0213 20:45:35.770053    3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4xj\" (UniqueName: \"kubernetes.io/projected/a11cb515-89d7-469b-9b5c-347d66dd86cd-kube-api-access-rz4xj\") pod \"coredns-6f6b679f8f-7t4w9\" (UID: \"a11cb515-89d7-469b-9b5c-347d66dd86cd\") " pod="kube-system/coredns-6f6b679f8f-7t4w9"
Feb 13 20:45:35.761450 systemd[1]: Created slice kubepods-besteffort-pod22e5305d_78c3_49cd_bb6d_90df1e2b864e.slice - libcontainer container kubepods-besteffort-pod22e5305d_78c3_49cd_bb6d_90df1e2b864e.slice.
Feb 13 20:45:35.775987 systemd[1]: Created slice kubepods-besteffort-pod769f719e_512b_4b14_b16e_ae7dc6a2ea08.slice - libcontainer container kubepods-besteffort-pod769f719e_512b_4b14_b16e_ae7dc6a2ea08.slice.
Feb 13 20:45:35.786263 systemd[1]: Created slice kubepods-burstable-poda11cb515_89d7_469b_9b5c_347d66dd86cd.slice - libcontainer container kubepods-burstable-poda11cb515_89d7_469b_9b5c_347d66dd86cd.slice.
Feb 13 20:45:35.798130 systemd[1]: Created slice kubepods-besteffort-pod94141c30_c43e_4a2f_8964_6298496fe9ed.slice - libcontainer container kubepods-besteffort-pod94141c30_c43e_4a2f_8964_6298496fe9ed.slice.
Feb 13 20:45:35.803194 systemd[1]: Created slice kubepods-besteffort-pod693eb617_fd33_495b_bd76_299eb1a516ac.slice - libcontainer container kubepods-besteffort-pod693eb617_fd33_495b_bd76_299eb1a516ac.slice.
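The Created slice lines are the kubelet's systemd cgroup driver materializing one slice per pod, with the QoS class (burstable or besteffort) and the pod UID, dashes escaped to underscores, encoded in the unit name: UID 01611f6f-431c-448d-b299-3b089d806504 becomes kubepods-burstable-pod01611f6f_431c_448d_b299_3b089d806504.slice, exactly as logged. A sketch of that name construction, derived from the pattern visible in these lines rather than from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the unit-name pattern visible in the log:
    // kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "01611f6f-431c-448d-b299-3b089d806504"))
        fmt.Println(sliceName("besteffort", "22e5305d-78c3-49cd-bb6d-90df1e2b864e"))
    }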
Feb 13 20:45:36.096137 containerd[1726]: time="2025-02-13T20:45:36.094543606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9m58,Uid:22e5305d-78c3-49cd-bb6d-90df1e2b864e,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:36.395946 containerd[1726]: time="2025-02-13T20:45:36.395896420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t4w9,Uid:a11cb515-89d7-469b-9b5c-347d66dd86cd,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:36.396125 containerd[1726]: time="2025-02-13T20:45:36.395897180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzzs7,Uid:01611f6f-431c-448d-b299-3b089d806504,Namespace:kube-system,Attempt:0,}"
Feb 13 20:45:36.396861 containerd[1726]: time="2025-02-13T20:45:36.396819141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-c2v2b,Uid:94141c30-c43e-4a2f-8964-6298496fe9ed,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:45:36.416915 containerd[1726]: time="2025-02-13T20:45:36.416868766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c76945-fvdq7,Uid:769f719e-512b-4b14-b16e-ae7dc6a2ea08,Namespace:calico-system,Attempt:0,}"
Feb 13 20:45:36.421690 containerd[1726]: time="2025-02-13T20:45:36.421601492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-ssgrs,Uid:693eb617-fd33-495b-bd76-299eb1a516ac,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 20:45:36.619053 containerd[1726]: time="2025-02-13T20:45:36.618988616Z" level=info msg="shim disconnected" id=97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d namespace=k8s.io
Feb 13 20:45:36.619053 containerd[1726]: time="2025-02-13T20:45:36.619044696Z" level=warning msg="cleaning up after shim disconnected" id=97694495203cf3cb99bbbaeb83b99bacf683bfaae70cb751a8edf47f8aabef8d namespace=k8s.io
Feb 13 20:45:36.619053 containerd[1726]: time="2025-02-13T20:45:36.619053376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:45:36.826093 containerd[1726]: time="2025-02-13T20:45:36.826047433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 20:45:36.975637 containerd[1726]: time="2025-02-13T20:45:36.975580858Z" level=error msg="Failed to destroy network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:36.977613 containerd[1726]: time="2025-02-13T20:45:36.977560861Z" level=error msg="encountered an error cleaning up failed sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:36.977613 containerd[1726]: time="2025-02-13T20:45:36.977631301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c76945-fvdq7,Uid:769f719e-512b-4b14-b16e-ae7dc6a2ea08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:36.978642 kubelet[3141]: E0213 20:45:36.978405    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:36.978642 kubelet[3141]: E0213 20:45:36.978479    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c76945-fvdq7"
Feb 13 20:45:36.978642 kubelet[3141]: E0213 20:45:36.978499    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c76945-fvdq7"
Feb 13 20:45:36.979445 kubelet[3141]: E0213 20:45:36.978540    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65c76945-fvdq7_calico-system(769f719e-512b-4b14-b16e-ae7dc6a2ea08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65c76945-fvdq7_calico-system(769f719e-512b-4b14-b16e-ae7dc6a2ea08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c76945-fvdq7" podUID="769f719e-512b-4b14-b16e-ae7dc6a2ea08"
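Every sandbox failure in this stretch bottoms out in the same stat: the Calico CNI plugin reads /var/lib/calico/nodename, a file written by the calico/node container once it starts, to learn which Calico node it is acting for; the node image is still being pulled at this point in the log, so the file is absent and every CNI add or delete fails with the error shown. A sketch of that gate, assuming only the path and error text visible in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const nodenameFile = "/var/lib/calico/nodename"
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Mirrors the error string every failing sandbox reports above.
            fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
            os.Exit(1)
        }
        fmt.Println("calico node name:", strings.TrimSpace(string(data)))
    }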
Feb 13 20:45:37.009505 containerd[1726]: time="2025-02-13T20:45:37.009368380Z" level=error msg="Failed to destroy network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.009729 containerd[1726]: time="2025-02-13T20:45:37.009687501Z" level=error msg="encountered an error cleaning up failed sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.009773 containerd[1726]: time="2025-02-13T20:45:37.009745141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t4w9,Uid:a11cb515-89d7-469b-9b5c-347d66dd86cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.009983 kubelet[3141]: E0213 20:45:37.009944    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.010068 kubelet[3141]: E0213 20:45:37.010007    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7t4w9"
Feb 13 20:45:37.010068 kubelet[3141]: E0213 20:45:37.010026    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7t4w9"
Feb 13 20:45:37.010266 kubelet[3141]: E0213 20:45:37.010107    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7t4w9_kube-system(a11cb515-89d7-469b-9b5c-347d66dd86cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7t4w9_kube-system(a11cb515-89d7-469b-9b5c-347d66dd86cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7t4w9" podUID="a11cb515-89d7-469b-9b5c-347d66dd86cd"
Feb 13 20:45:37.022317 containerd[1726]: time="2025-02-13T20:45:37.020191314Z" level=error msg="Failed to destroy network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.023758 containerd[1726]: time="2025-02-13T20:45:37.023601318Z" level=error msg="encountered an error cleaning up failed sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.023758 containerd[1726]: time="2025-02-13T20:45:37.023718158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9m58,Uid:22e5305d-78c3-49cd-bb6d-90df1e2b864e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.024011 containerd[1726]: time="2025-02-13T20:45:37.023632478Z" level=error msg="Failed to destroy network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.024746 kubelet[3141]: E0213 20:45:37.024604    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.024746 kubelet[3141]: E0213 20:45:37.024672    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:37.024746 kubelet[3141]: E0213 20:45:37.024693    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z9m58"
Feb 13 20:45:37.025117 kubelet[3141]: E0213 20:45:37.024733    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z9m58_calico-system(22e5305d-78c3-49cd-bb6d-90df1e2b864e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z9m58_calico-system(22e5305d-78c3-49cd-bb6d-90df1e2b864e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:37.025907 containerd[1726]: time="2025-02-13T20:45:37.025841841Z" level=error msg="Failed to destroy network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.026322 containerd[1726]: time="2025-02-13T20:45:37.026186401Z" level=error msg="encountered an error cleaning up failed sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.026322 containerd[1726]: time="2025-02-13T20:45:37.026277281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-c2v2b,Uid:94141c30-c43e-4a2f-8964-6298496fe9ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.027015 containerd[1726]: time="2025-02-13T20:45:37.026678922Z" level=error msg="encountered an error cleaning up failed sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.027015 containerd[1726]: time="2025-02-13T20:45:37.026738522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzzs7,Uid:01611f6f-431c-448d-b299-3b089d806504,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.027120 kubelet[3141]: E0213 20:45:37.026827    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.027120 kubelet[3141]: E0213 20:45:37.026893    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b"
Feb 13 20:45:37.027120 kubelet[3141]: E0213 20:45:37.026915    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b"
Feb 13 20:45:37.027221 kubelet[3141]: E0213 20:45:37.026962    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc65467b6-c2v2b_calico-apiserver(94141c30-c43e-4a2f-8964-6298496fe9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc65467b6-c2v2b_calico-apiserver(94141c30-c43e-4a2f-8964-6298496fe9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b" podUID="94141c30-c43e-4a2f-8964-6298496fe9ed"
Feb 13 20:45:37.027647 kubelet[3141]: E0213 20:45:37.027477    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.027647 kubelet[3141]: E0213 20:45:37.027532    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zzzs7"
Feb 13 20:45:37.027647 kubelet[3141]: E0213 20:45:37.027549    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zzzs7"
Feb 13 20:45:37.027792 kubelet[3141]: E0213 20:45:37.027606    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-zzzs7_kube-system(01611f6f-431c-448d-b299-3b089d806504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-zzzs7_kube-system(01611f6f-431c-448d-b299-3b089d806504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zzzs7" podUID="01611f6f-431c-448d-b299-3b089d806504"
Feb 13 20:45:37.030128 containerd[1726]: time="2025-02-13T20:45:37.029920806Z" level=error msg="Failed to destroy network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.030499 containerd[1726]: time="2025-02-13T20:45:37.030401566Z" level=error msg="encountered an error cleaning up failed sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.030649 containerd[1726]: time="2025-02-13T20:45:37.030479286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-ssgrs,Uid:693eb617-fd33-495b-bd76-299eb1a516ac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.030933 kubelet[3141]: E0213 20:45:37.030877    3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.031031 kubelet[3141]: E0213 20:45:37.030942    3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs"
Feb 13 20:45:37.031031 kubelet[3141]: E0213 20:45:37.030962    3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs"
Feb 13 20:45:37.031031 kubelet[3141]: E0213 20:45:37.031010    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc65467b6-ssgrs_calico-apiserver(693eb617-fd33-495b-bd76-299eb1a516ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc65467b6-ssgrs_calico-apiserver(693eb617-fd33-495b-bd76-299eb1a516ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs" podUID="693eb617-fd33-495b-bd76-299eb1a516ac"
Feb 13 20:45:37.675804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b-shm.mount: Deactivated successfully.
Feb 13 20:45:37.675888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1-shm.mount: Deactivated successfully.
Feb 13 20:45:37.675938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f-shm.mount: Deactivated successfully.
Feb 13 20:45:37.829109 kubelet[3141]: I0213 20:45:37.827462    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:37.829303 containerd[1726]: time="2025-02-13T20:45:37.828565916Z" level=info msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\""
Feb 13 20:45:37.829303 containerd[1726]: time="2025-02-13T20:45:37.828727996Z" level=info msg="Ensure that sandbox e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1 in task-service has been cleanup successfully"
Feb 13 20:45:37.830568 kubelet[3141]: I0213 20:45:37.830528    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:37.831450 containerd[1726]: time="2025-02-13T20:45:37.831289199Z" level=info msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\""
Feb 13 20:45:37.831963 kubelet[3141]: I0213 20:45:37.831925    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:37.832918 containerd[1726]: time="2025-02-13T20:45:37.832446801Z" level=info msg="Ensure that sandbox ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b in task-service has been cleanup successfully"
Feb 13 20:45:37.833639 containerd[1726]: time="2025-02-13T20:45:37.833610282Z" level=info msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\""
Feb 13 20:45:37.834157 containerd[1726]: time="2025-02-13T20:45:37.834129763Z" level=info msg="Ensure that sandbox 99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423 in task-service has been cleanup successfully"
Feb 13 20:45:37.840375 kubelet[3141]: I0213 20:45:37.840237    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:37.840824 containerd[1726]: time="2025-02-13T20:45:37.840786291Z" level=info msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\""
Feb 13 20:45:37.841279 containerd[1726]: time="2025-02-13T20:45:37.841001651Z" level=info msg="Ensure that sandbox d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5 in task-service has been cleanup successfully"
Feb 13 20:45:37.847986 kubelet[3141]: I0213 20:45:37.847236    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:37.851051 containerd[1726]: time="2025-02-13T20:45:37.850956224Z" level=info msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\""
Feb 13 20:45:37.851186 containerd[1726]: time="2025-02-13T20:45:37.851167944Z" level=info msg="Ensure that sandbox 874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c in task-service has been cleanup successfully"
Feb 13 20:45:37.852581 kubelet[3141]: I0213 20:45:37.852488    3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:37.854573 containerd[1726]: time="2025-02-13T20:45:37.854436108Z" level=info msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\""
Feb 13 20:45:37.858753 containerd[1726]: time="2025-02-13T20:45:37.857023831Z" level=info msg="Ensure that sandbox 5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f in task-service has been cleanup successfully"
Feb 13 20:45:37.915055 containerd[1726]: time="2025-02-13T20:45:37.914994463Z" level=error msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" failed" error="failed to destroy network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.915716 kubelet[3141]: E0213 20:45:37.915570    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:37.916388 kubelet[3141]: E0213 20:45:37.915745    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"}
Feb 13 20:45:37.916388 kubelet[3141]: E0213 20:45:37.915817    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"693eb617-fd33-495b-bd76-299eb1a516ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.916388 kubelet[3141]: E0213 20:45:37.915841    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"693eb617-fd33-495b-bd76-299eb1a516ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs" podUID="693eb617-fd33-495b-bd76-299eb1a516ac"
Feb 13 20:45:37.916540 containerd[1726]: time="2025-02-13T20:45:37.916159385Z" level=error msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" failed" error="failed to destroy network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.918652 kubelet[3141]: E0213 20:45:37.918607    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:37.918761 kubelet[3141]: E0213 20:45:37.918662    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"}
Feb 13 20:45:37.918761 kubelet[3141]: E0213 20:45:37.918696    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.918761 kubelet[3141]: E0213 20:45:37.918716    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22e5305d-78c3-49cd-bb6d-90df1e2b864e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z9m58" podUID="22e5305d-78c3-49cd-bb6d-90df1e2b864e"
Feb 13 20:45:37.925920 containerd[1726]: time="2025-02-13T20:45:37.925861557Z" level=error msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" failed" error="failed to destroy network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.926162 kubelet[3141]: E0213 20:45:37.926081    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:37.926162 kubelet[3141]: E0213 20:45:37.926131    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"}
Feb 13 20:45:37.926256 kubelet[3141]: E0213 20:45:37.926168    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94141c30-c43e-4a2f-8964-6298496fe9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.926256 kubelet[3141]: E0213 20:45:37.926188    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94141c30-c43e-4a2f-8964-6298496fe9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b" podUID="94141c30-c43e-4a2f-8964-6298496fe9ed"
Feb 13 20:45:37.933666 containerd[1726]: time="2025-02-13T20:45:37.933481686Z" level=error msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" failed" error="failed to destroy network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.934738 kubelet[3141]: E0213 20:45:37.933815    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:37.934738 kubelet[3141]: E0213 20:45:37.934038    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"}
Feb 13 20:45:37.934738 kubelet[3141]: E0213 20:45:37.934089    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a11cb515-89d7-469b-9b5c-347d66dd86cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.934738 kubelet[3141]: E0213 20:45:37.934130    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a11cb515-89d7-469b-9b5c-347d66dd86cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7t4w9" podUID="a11cb515-89d7-469b-9b5c-347d66dd86cd"
Feb 13 20:45:37.938168 containerd[1726]: time="2025-02-13T20:45:37.938125252Z" level=error msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" failed" error="failed to destroy network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.938550 kubelet[3141]: E0213 20:45:37.938507    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:37.938634 kubelet[3141]: E0213 20:45:37.938564    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"}
Feb 13 20:45:37.938634 kubelet[3141]: E0213 20:45:37.938597    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01611f6f-431c-448d-b299-3b089d806504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.938634 kubelet[3141]: E0213 20:45:37.938616    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01611f6f-431c-448d-b299-3b089d806504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zzzs7" podUID="01611f6f-431c-448d-b299-3b089d806504"
Feb 13 20:45:37.944078 containerd[1726]: time="2025-02-13T20:45:37.943921699Z" level=error msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" failed" error="failed to destroy network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 20:45:37.944395 kubelet[3141]: E0213 20:45:37.944325    3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:37.944481 kubelet[3141]: E0213 20:45:37.944410    3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"}
Feb 13 20:45:37.944481 kubelet[3141]: E0213 20:45:37.944444    3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"769f719e-512b-4b14-b16e-ae7dc6a2ea08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 20:45:37.944481 kubelet[3141]: E0213 20:45:37.944465    3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"769f719e-512b-4b14-b16e-ae7dc6a2ea08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c76945-fvdq7" podUID="769f719e-512b-4b14-b16e-ae7dc6a2ea08"
Feb 13 20:45:43.184361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997927154.mount: Deactivated successfully.
Feb 13 20:45:43.281707 containerd[1726]: time="2025-02-13T20:45:43.281649240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:43.284419 containerd[1726]: time="2025-02-13T20:45:43.284378924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 20:45:43.288673 containerd[1726]: time="2025-02-13T20:45:43.288615449Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:43.292965 containerd[1726]: time="2025-02-13T20:45:43.292928775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:43.293689 containerd[1726]: time="2025-02-13T20:45:43.293533296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.467438943s"
Feb 13 20:45:43.293689 containerd[1726]: time="2025-02-13T20:45:43.293573936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 20:45:43.302748 containerd[1726]: time="2025-02-13T20:45:43.302624948Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 20:45:43.353236 containerd[1726]: time="2025-02-13T20:45:43.353140856Z" level=info msg="CreateContainer within sandbox \"bc41b0ab168f2bdeb61de92974e90f383d961a69c052add400954fd8a36534db\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a\""
Feb 13 20:45:43.354097 containerd[1726]: time="2025-02-13T20:45:43.353943897Z" level=info msg="StartContainer for \"12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a\""
Feb 13 20:45:43.378603 systemd[1]: Started cri-containerd-12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a.scope - libcontainer container 12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a.
Feb 13 20:45:43.411009 containerd[1726]: time="2025-02-13T20:45:43.410958453Z" level=info msg="StartContainer for \"12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a\" returns successfully"
Feb 13 20:45:43.632360 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 20:45:43.632486 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 20:45:43.891288 kubelet[3141]: I0213 20:45:43.891138    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mmrw5" podStartSLOduration=2.755420908 podStartE2EDuration="20.891121894s" podCreationTimestamp="2025-02-13 20:45:23 +0000 UTC" firstStartedPulling="2025-02-13 20:45:25.158673551 +0000 UTC m=+17.666490696" lastFinishedPulling="2025-02-13 20:45:43.294374537 +0000 UTC m=+35.802191682" observedRunningTime="2025-02-13 20:45:43.890906814 +0000 UTC m=+36.398723959" watchObservedRunningTime="2025-02-13 20:45:43.891121894 +0000 UTC m=+36.398939039"
Feb 13 20:45:45.259372 kernel: bpftool[4399]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 20:45:45.432537 systemd-networkd[1626]: vxlan.calico: Link UP
Feb 13 20:45:45.432552 systemd-networkd[1626]: vxlan.calico: Gained carrier
Feb 13 20:45:47.052606 systemd-networkd[1626]: vxlan.calico: Gained IPv6LL
Feb 13 20:45:48.743718 containerd[1726]: time="2025-02-13T20:45:48.742236736Z" level=info msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\""
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:48.801 [INFO][4490] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.004 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" iface="eth0" netns="/var/run/netns/cni-c1b1992f-81fe-4084-cf38-09bfa2c11612"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.004 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" iface="eth0" netns="/var/run/netns/cni-c1b1992f-81fe-4084-cf38-09bfa2c11612"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.004 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" iface="eth0" netns="/var/run/netns/cni-c1b1992f-81fe-4084-cf38-09bfa2c11612"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.004 [INFO][4490] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.004 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.026 [INFO][4497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.026 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.026 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.034 [WARNING][4497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.034 [INFO][4497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.036 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:49.040220 containerd[1726]: 2025-02-13 20:45:49.038 [INFO][4490] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:45:49.042656 containerd[1726]: time="2025-02-13T20:45:49.042609781Z" level=info msg="TearDown network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" successfully"
Feb 13 20:45:49.042656 containerd[1726]: time="2025-02-13T20:45:49.042649621Z" level=info msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" returns successfully"
Feb 13 20:45:49.043308 systemd[1]: run-netns-cni\x2dc1b1992f\x2d81fe\x2d4084\x2dcf38\x2d09bfa2c11612.mount: Deactivated successfully.
Feb 13 20:45:49.044415 containerd[1726]: time="2025-02-13T20:45:49.043568102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-c2v2b,Uid:94141c30-c43e-4a2f-8964-6298496fe9ed,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 20:45:49.260590 systemd-networkd[1626]: cali5dfc9c85e4e: Link UP
Feb 13 20:45:49.260804 systemd-networkd[1626]: cali5dfc9c85e4e: Gained carrier
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.188 [INFO][4504] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0 calico-apiserver-7bc65467b6- calico-apiserver  94141c30-c43e-4a2f-8964-6298496fe9ed 773 0 2025-02-13 20:45:23 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc65467b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  calico-apiserver-7bc65467b6-c2v2b eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5dfc9c85e4e  [] []}} ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.188 [INFO][4504] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.216 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" HandleID="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.227 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" HandleID="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"calico-apiserver-7bc65467b6-c2v2b", "timestamp":"2025-02-13 20:45:49.216963697 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.228 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.228 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.228 [INFO][4515] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.229 [INFO][4515] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.233 [INFO][4515] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.237 [INFO][4515] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.238 [INFO][4515] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.240 [INFO][4515] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.240 [INFO][4515] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.242 [INFO][4515] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.246 [INFO][4515] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.253 [INFO][4515] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.65/26] block=192.168.105.64/26 handle="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.253 [INFO][4515] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.65/26] handle="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.253 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:49.278779 containerd[1726]: 2025-02-13 20:45:49.253 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.65/26] IPv6=[] ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" HandleID="k8s-pod-network.4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.255 [INFO][4504] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"94141c30-c43e-4a2f-8964-6298496fe9ed", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"calico-apiserver-7bc65467b6-c2v2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dfc9c85e4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.255 [INFO][4504] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.65/32] ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.256 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dfc9c85e4e ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.260 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.261 [INFO][4504] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"94141c30-c43e-4a2f-8964-6298496fe9ed", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32", Pod:"calico-apiserver-7bc65467b6-c2v2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dfc9c85e4e", MAC:"62:9f:9f:1f:c8:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:49.279864 containerd[1726]: 2025-02-13 20:45:49.275 [INFO][4504] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-c2v2b" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:45:49.304454 containerd[1726]: time="2025-02-13T20:45:49.304131774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:49.304849 containerd[1726]: time="2025-02-13T20:45:49.304651015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:49.304849 containerd[1726]: time="2025-02-13T20:45:49.304700615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:49.305097 containerd[1726]: time="2025-02-13T20:45:49.304924815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:49.330548 systemd[1]: Started cri-containerd-4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32.scope - libcontainer container 4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32.
Feb 13 20:45:49.362842 containerd[1726]: time="2025-02-13T20:45:49.362797373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-c2v2b,Uid:94141c30-c43e-4a2f-8964-6298496fe9ed,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32\""
Feb 13 20:45:49.365845 containerd[1726]: time="2025-02-13T20:45:49.365814057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 20:45:49.741574 containerd[1726]: time="2025-02-13T20:45:49.741299004Z" level=info msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\""
Feb 13 20:45:49.741926 containerd[1726]: time="2025-02-13T20:45:49.741663365Z" level=info msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\""
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.817 [INFO][4601] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.817 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" iface="eth0" netns="/var/run/netns/cni-ba3d4207-dedb-1339-397c-bd077ef4e822"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.818 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" iface="eth0" netns="/var/run/netns/cni-ba3d4207-dedb-1339-397c-bd077ef4e822"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.819 [INFO][4601] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" iface="eth0" netns="/var/run/netns/cni-ba3d4207-dedb-1339-397c-bd077ef4e822"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.819 [INFO][4601] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.819 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.848 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.848 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.848 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.859 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.859 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.860 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:49.865286 containerd[1726]: 2025-02-13 20:45:49.862 [INFO][4601] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:45:49.867966 containerd[1726]: time="2025-02-13T20:45:49.865353372Z" level=info msg="TearDown network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" successfully"
Feb 13 20:45:49.867966 containerd[1726]: time="2025-02-13T20:45:49.865384292Z" level=info msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" returns successfully"
Feb 13 20:45:49.867966 containerd[1726]: time="2025-02-13T20:45:49.866561173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-ssgrs,Uid:693eb617-fd33-495b-bd76-299eb1a516ac,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.823 [INFO][4600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.824 [INFO][4600] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" iface="eth0" netns="/var/run/netns/cni-abe246f0-6565-3140-42b3-e2749b82706b"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.825 [INFO][4600] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" iface="eth0" netns="/var/run/netns/cni-abe246f0-6565-3140-42b3-e2749b82706b"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.825 [INFO][4600] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" iface="eth0" netns="/var/run/netns/cni-abe246f0-6565-3140-42b3-e2749b82706b"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.825 [INFO][4600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.825 [INFO][4600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.853 [INFO][4616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.854 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.860 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.872 [WARNING][4616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.872 [INFO][4616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.874 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:49.878941 containerd[1726]: 2025-02-13 20:45:49.876 [INFO][4600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:45:49.879320 containerd[1726]: time="2025-02-13T20:45:49.879236111Z" level=info msg="TearDown network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" successfully"
Feb 13 20:45:49.879320 containerd[1726]: time="2025-02-13T20:45:49.879260471Z" level=info msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" returns successfully"
Feb 13 20:45:49.880096 containerd[1726]: time="2025-02-13T20:45:49.880058552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzzs7,Uid:01611f6f-431c-448d-b299-3b089d806504,Namespace:kube-system,Attempt:1,}"
Feb 13 20:45:50.047063 systemd[1]: run-containerd-runc-k8s.io-4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32-runc.ZKpPTI.mount: Deactivated successfully.
Feb 13 20:45:50.047179 systemd[1]: run-netns-cni\x2dba3d4207\x2ddedb\x2d1339\x2d397c\x2dbd077ef4e822.mount: Deactivated successfully.
Feb 13 20:45:50.047230 systemd[1]: run-netns-cni\x2dabe246f0\x2d6565\x2d3140\x2d42b3\x2de2749b82706b.mount: Deactivated successfully.
Feb 13 20:45:50.056717 systemd-networkd[1626]: cali878deea835e: Link UP
Feb 13 20:45:50.059211 systemd-networkd[1626]: cali878deea835e: Gained carrier
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:49.969 [INFO][4624] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0 calico-apiserver-7bc65467b6- calico-apiserver  693eb617-fd33-495b-bd76-299eb1a516ac 781 0 2025-02-13 20:45:23 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc65467b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  calico-apiserver-7bc65467b6-ssgrs eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali878deea835e  [] []}} ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:49.969 [INFO][4624] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:49.999 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" HandleID="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.011 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" HandleID="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"calico-apiserver-7bc65467b6-ssgrs", "timestamp":"2025-02-13 20:45:49.999123032 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.011 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.011 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.011 [INFO][4648] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.013 [INFO][4648] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.018 [INFO][4648] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.022 [INFO][4648] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.025 [INFO][4648] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.027 [INFO][4648] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.027 [INFO][4648] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.029 [INFO][4648] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.039 [INFO][4648] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.050 [INFO][4648] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.66/26] block=192.168.105.64/26 handle="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.050 [INFO][4648] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.66/26] handle="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.050 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:50.076924 containerd[1726]: 2025-02-13 20:45:50.050 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.66/26] IPv6=[] ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" HandleID="k8s-pod-network.2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.054 [INFO][4624] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"693eb617-fd33-495b-bd76-299eb1a516ac", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"calico-apiserver-7bc65467b6-ssgrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali878deea835e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.054 [INFO][4624] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.66/32] ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.054 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali878deea835e ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.058 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.059 [INFO][4624] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"693eb617-fd33-495b-bd76-299eb1a516ac", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6", Pod:"calico-apiserver-7bc65467b6-ssgrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali878deea835e", MAC:"42:fc:d1:66:cb:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:50.077497 containerd[1726]: 2025-02-13 20:45:50.074 [INFO][4624] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6" Namespace="calico-apiserver" Pod="calico-apiserver-7bc65467b6-ssgrs" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:45:50.099865 containerd[1726]: time="2025-02-13T20:45:50.099752488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:50.099865 containerd[1726]: time="2025-02-13T20:45:50.099809688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:50.099865 containerd[1726]: time="2025-02-13T20:45:50.099823928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:50.100182 containerd[1726]: time="2025-02-13T20:45:50.099906128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:50.131123 systemd[1]: Started cri-containerd-2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6.scope - libcontainer container 2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6.
Feb 13 20:45:50.172074 systemd-networkd[1626]: cali5f8311cec0a: Link UP
Feb 13 20:45:50.173422 systemd-networkd[1626]: cali5f8311cec0a: Gained carrier
Feb 13 20:45:50.188466 containerd[1726]: time="2025-02-13T20:45:50.188395008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc65467b6-ssgrs,Uid:693eb617-fd33-495b-bd76-299eb1a516ac,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6\""
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:49.968 [INFO][4636] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0 coredns-6f6b679f8f- kube-system  01611f6f-431c-448d-b299-3b089d806504 782 0 2025-02-13 20:45:14 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  coredns-6f6b679f8f-zzzs7 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali5f8311cec0a  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:49.968 [INFO][4636] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.001 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" HandleID="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.017 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" HandleID="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b9ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"coredns-6f6b679f8f-zzzs7", "timestamp":"2025-02-13 20:45:50.001603556 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.017 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.051 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.051 [INFO][4647] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.114 [INFO][4647] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.126 [INFO][4647] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.131 [INFO][4647] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.134 [INFO][4647] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.138 [INFO][4647] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.138 [INFO][4647] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.142 [INFO][4647] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.149 [INFO][4647] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.162 [INFO][4647] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.67/26] block=192.168.105.64/26 handle="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.162 [INFO][4647] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.67/26] handle="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.162 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:50.198489 containerd[1726]: 2025-02-13 20:45:50.162 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.67/26] IPv6=[] ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" HandleID="k8s-pod-network.61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.165 [INFO][4636] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"01611f6f-431c-448d-b299-3b089d806504", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"coredns-6f6b679f8f-zzzs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f8311cec0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.165 [INFO][4636] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.67/32] ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.165 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f8311cec0a ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.173 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.175 [INFO][4636] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"01611f6f-431c-448d-b299-3b089d806504", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7", Pod:"coredns-6f6b679f8f-zzzs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f8311cec0a", MAC:"d6:0f:2a:97:94:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:50.199320 containerd[1726]: 2025-02-13 20:45:50.193 [INFO][4636] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-zzzs7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:45:50.235881 containerd[1726]: time="2025-02-13T20:45:50.235606432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:50.235881 containerd[1726]: time="2025-02-13T20:45:50.235674792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:50.235881 containerd[1726]: time="2025-02-13T20:45:50.235690152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:50.235881 containerd[1726]: time="2025-02-13T20:45:50.235788232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:50.260542 systemd[1]: Started cri-containerd-61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7.scope - libcontainer container 61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7.
Feb 13 20:45:50.289529 containerd[1726]: time="2025-02-13T20:45:50.289489304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zzzs7,Uid:01611f6f-431c-448d-b299-3b089d806504,Namespace:kube-system,Attempt:1,} returns sandbox id \"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7\""
Feb 13 20:45:50.296356 containerd[1726]: time="2025-02-13T20:45:50.296177873Z" level=info msg="CreateContainer within sandbox \"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:45:50.355957 containerd[1726]: time="2025-02-13T20:45:50.355906434Z" level=info msg="CreateContainer within sandbox \"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9661ca7075bbfc04f9450ee4cac65ae8abaf490b50a3170389d346765d29c657\""
Feb 13 20:45:50.356504 containerd[1726]: time="2025-02-13T20:45:50.356472835Z" level=info msg="StartContainer for \"9661ca7075bbfc04f9450ee4cac65ae8abaf490b50a3170389d346765d29c657\""
Feb 13 20:45:50.384532 systemd[1]: Started cri-containerd-9661ca7075bbfc04f9450ee4cac65ae8abaf490b50a3170389d346765d29c657.scope - libcontainer container 9661ca7075bbfc04f9450ee4cac65ae8abaf490b50a3170389d346765d29c657.
Feb 13 20:45:50.409941 containerd[1726]: time="2025-02-13T20:45:50.409767787Z" level=info msg="StartContainer for \"9661ca7075bbfc04f9450ee4cac65ae8abaf490b50a3170389d346765d29c657\" returns successfully"
Feb 13 20:45:50.445255 systemd-networkd[1626]: cali5dfc9c85e4e: Gained IPv6LL
Feb 13 20:45:50.746762 containerd[1726]: time="2025-02-13T20:45:50.746429681Z" level=info msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\""
Feb 13 20:45:50.747363 containerd[1726]: time="2025-02-13T20:45:50.746982842Z" level=info msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\""
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.814 [INFO][4830] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.815 [INFO][4830] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" iface="eth0" netns="/var/run/netns/cni-6b2e0547-ef78-bbe8-6d7e-2e1b909570ae"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.816 [INFO][4830] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" iface="eth0" netns="/var/run/netns/cni-6b2e0547-ef78-bbe8-6d7e-2e1b909570ae"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.817 [INFO][4830] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" iface="eth0" netns="/var/run/netns/cni-6b2e0547-ef78-bbe8-6d7e-2e1b909570ae"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.817 [INFO][4830] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.817 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.843 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.843 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.844 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.856 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.856 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.859 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:50.863598 containerd[1726]: 2025-02-13 20:45:50.861 [INFO][4830] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:45:50.864915 containerd[1726]: time="2025-02-13T20:45:50.864806881Z" level=info msg="TearDown network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" successfully"
Feb 13 20:45:50.864915 containerd[1726]: time="2025-02-13T20:45:50.864838721Z" level=info msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" returns successfully"
Feb 13 20:45:50.865830 containerd[1726]: time="2025-02-13T20:45:50.865746442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9m58,Uid:22e5305d-78c3-49cd-bb6d-90df1e2b864e,Namespace:calico-system,Attempt:1,}"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.830 [INFO][4831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.830 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" iface="eth0" netns="/var/run/netns/cni-0273237b-2fc4-68ea-1654-fecdb466bb65"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.830 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" iface="eth0" netns="/var/run/netns/cni-0273237b-2fc4-68ea-1654-fecdb466bb65"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.831 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" iface="eth0" netns="/var/run/netns/cni-0273237b-2fc4-68ea-1654-fecdb466bb65"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.831 [INFO][4831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.831 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.863 [INFO][4847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.864 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.864 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.877 [WARNING][4847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.878 [INFO][4847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.882 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:50.889818 containerd[1726]: 2025-02-13 20:45:50.884 [INFO][4831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:45:50.890190 containerd[1726]: time="2025-02-13T20:45:50.889869395Z" level=info msg="TearDown network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" successfully"
Feb 13 20:45:50.890190 containerd[1726]: time="2025-02-13T20:45:50.889898355Z" level=info msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" returns successfully"
Feb 13 20:45:50.891799 containerd[1726]: time="2025-02-13T20:45:50.891680237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t4w9,Uid:a11cb515-89d7-469b-9b5c-347d66dd86cd,Namespace:kube-system,Attempt:1,}"
Feb 13 20:45:50.908862 kubelet[3141]: I0213 20:45:50.907906    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zzzs7" podStartSLOduration=36.907885139 podStartE2EDuration="36.907885139s" podCreationTimestamp="2025-02-13 20:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:50.907171898 +0000 UTC m=+43.414989043" watchObservedRunningTime="2025-02-13 20:45:50.907885139 +0000 UTC m=+43.415702284"
Feb 13 20:45:51.052170 systemd[1]: run-containerd-runc-k8s.io-61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7-runc.t75mW9.mount: Deactivated successfully.
Feb 13 20:45:51.052286 systemd[1]: run-netns-cni\x2d0273237b\x2d2fc4\x2d68ea\x2d1654\x2dfecdb466bb65.mount: Deactivated successfully.
Feb 13 20:45:51.052354 systemd[1]: run-netns-cni\x2d6b2e0547\x2def78\x2dbbe8\x2d6d7e\x2d2e1b909570ae.mount: Deactivated successfully.
Feb 13 20:45:51.087541 systemd-networkd[1626]: cali26a6a4ff500: Link UP
Feb 13 20:45:51.088189 systemd-networkd[1626]: cali26a6a4ff500: Gained carrier
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:50.976 [INFO][4859] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0 csi-node-driver- calico-system  22e5305d-78c3-49cd-bb6d-90df1e2b864e 797 0 2025-02-13 20:45:23 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  csi-node-driver-z9m58 eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali26a6a4ff500  [] []}} ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:50.976 [INFO][4859] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.018 [INFO][4885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" HandleID="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.034 [INFO][4885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" HandleID="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"csi-node-driver-z9m58", "timestamp":"2025-02-13 20:45:51.018610889 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.034 [INFO][4885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.034 [INFO][4885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.034 [INFO][4885] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.036 [INFO][4885] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.044 [INFO][4885] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.058 [INFO][4885] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.060 [INFO][4885] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.063 [INFO][4885] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.063 [INFO][4885] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.066 [INFO][4885] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.074 [INFO][4885] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.081 [INFO][4885] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.68/26] block=192.168.105.64/26 handle="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.081 [INFO][4885] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.68/26] handle="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.081 [INFO][4885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:51.104983 containerd[1726]: 2025-02-13 20:45:51.082 [INFO][4885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.68/26] IPv6=[] ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" HandleID="k8s-pod-network.149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.084 [INFO][4859] cni-plugin/k8s.go 386: Populated endpoint ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e5305d-78c3-49cd-bb6d-90df1e2b864e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"csi-node-driver-z9m58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26a6a4ff500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.084 [INFO][4859] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.68/32] ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.084 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26a6a4ff500 ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.088 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.089 [INFO][4859] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e5305d-78c3-49cd-bb6d-90df1e2b864e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd", Pod:"csi-node-driver-z9m58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26a6a4ff500", MAC:"3e:f9:de:9f:d2:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:51.106119 containerd[1726]: 2025-02-13 20:45:51.102 [INFO][4859] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd" Namespace="calico-system" Pod="csi-node-driver-z9m58" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:45:51.138829 containerd[1726]: time="2025-02-13T20:45:51.138611571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:51.138829 containerd[1726]: time="2025-02-13T20:45:51.138675411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:51.138829 containerd[1726]: time="2025-02-13T20:45:51.138698731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:51.139220 containerd[1726]: time="2025-02-13T20:45:51.138797851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:51.174547 systemd[1]: Started cri-containerd-149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd.scope - libcontainer container 149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd.
Feb 13 20:45:51.199440 systemd-networkd[1626]: cali61a1c5b64cd: Link UP
Feb 13 20:45:51.200221 systemd-networkd[1626]: cali61a1c5b64cd: Gained carrier
Feb 13 20:45:51.218613 containerd[1726]: time="2025-02-13T20:45:51.218546118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z9m58,Uid:22e5305d-78c3-49cd-bb6d-90df1e2b864e,Namespace:calico-system,Attempt:1,} returns sandbox id \"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd\""
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.010 [INFO][4872] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0 coredns-6f6b679f8f- kube-system  a11cb515-89d7-469b-9b5c-347d66dd86cd 798 0 2025-02-13 20:45:14 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  coredns-6f6b679f8f-7t4w9 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali61a1c5b64cd  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.010 [INFO][4872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.056 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" HandleID="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.069 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" HandleID="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"coredns-6f6b679f8f-7t4w9", "timestamp":"2025-02-13 20:45:51.056013099 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.069 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.081 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.081 [INFO][4894] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.139 [INFO][4894] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.151 [INFO][4894] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.159 [INFO][4894] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.166 [INFO][4894] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.170 [INFO][4894] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.170 [INFO][4894] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.173 [INFO][4894] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.183 [INFO][4894] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.191 [INFO][4894] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.69/26] block=192.168.105.64/26 handle="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.191 [INFO][4894] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.69/26] handle="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.192 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:51.231537 containerd[1726]: 2025-02-13 20:45:51.192 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.69/26] IPv6=[] ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" HandleID="k8s-pod-network.b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.195 [INFO][4872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a11cb515-89d7-469b-9b5c-347d66dd86cd", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"coredns-6f6b679f8f-7t4w9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61a1c5b64cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.195 [INFO][4872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.69/32] ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.195 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61a1c5b64cd ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.200 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.202 [INFO][4872] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a11cb515-89d7-469b-9b5c-347d66dd86cd", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4", Pod:"coredns-6f6b679f8f-7t4w9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61a1c5b64cd", MAC:"b2:b9:2e:e9:84:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:51.232064 containerd[1726]: 2025-02-13 20:45:51.228 [INFO][4872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4" Namespace="kube-system" Pod="coredns-6f6b679f8f-7t4w9" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:45:51.259976 containerd[1726]: time="2025-02-13T20:45:51.259300853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:51.259976 containerd[1726]: time="2025-02-13T20:45:51.259766334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:51.259976 containerd[1726]: time="2025-02-13T20:45:51.259786574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:51.259976 containerd[1726]: time="2025-02-13T20:45:51.259888814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:51.285669 systemd[1]: Started cri-containerd-b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4.scope - libcontainer container b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4.
Feb 13 20:45:51.320388 containerd[1726]: time="2025-02-13T20:45:51.318771774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7t4w9,Uid:a11cb515-89d7-469b-9b5c-347d66dd86cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4\""
Feb 13 20:45:51.324456 containerd[1726]: time="2025-02-13T20:45:51.324398941Z" level=info msg="CreateContainer within sandbox \"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:45:51.375313 containerd[1726]: time="2025-02-13T20:45:51.375265330Z" level=info msg="CreateContainer within sandbox \"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87a338df53654e35cf364bb697c3e44a731d9c6f9a0f7d9be5ebfcb22c6cc752\""
Feb 13 20:45:51.375976 containerd[1726]: time="2025-02-13T20:45:51.375904771Z" level=info msg="StartContainer for \"87a338df53654e35cf364bb697c3e44a731d9c6f9a0f7d9be5ebfcb22c6cc752\""
Feb 13 20:45:51.402567 systemd[1]: Started cri-containerd-87a338df53654e35cf364bb697c3e44a731d9c6f9a0f7d9be5ebfcb22c6cc752.scope - libcontainer container 87a338df53654e35cf364bb697c3e44a731d9c6f9a0f7d9be5ebfcb22c6cc752.
Feb 13 20:45:51.430236 containerd[1726]: time="2025-02-13T20:45:51.430106564Z" level=info msg="StartContainer for \"87a338df53654e35cf364bb697c3e44a731d9c6f9a0f7d9be5ebfcb22c6cc752\" returns successfully"
Feb 13 20:45:51.724493 systemd-networkd[1626]: cali5f8311cec0a: Gained IPv6LL
Feb 13 20:45:51.742210 containerd[1726]: time="2025-02-13T20:45:51.742121625Z" level=info msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\""
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.797 [INFO][5060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.797 [INFO][5060] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" iface="eth0" netns="/var/run/netns/cni-b7532572-e483-4655-b130-89cf99dd0351"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.798 [INFO][5060] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" iface="eth0" netns="/var/run/netns/cni-b7532572-e483-4655-b130-89cf99dd0351"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.799 [INFO][5060] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" iface="eth0" netns="/var/run/netns/cni-b7532572-e483-4655-b130-89cf99dd0351"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.799 [INFO][5060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.799 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.826 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.826 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.826 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.835 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.835 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.836 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:51.839926 containerd[1726]: 2025-02-13 20:45:51.838 [INFO][5060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:45:51.841159 containerd[1726]: time="2025-02-13T20:45:51.840548638Z" level=info msg="TearDown network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" successfully"
Feb 13 20:45:51.841159 containerd[1726]: time="2025-02-13T20:45:51.840579798Z" level=info msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" returns successfully"
Feb 13 20:45:51.841409 containerd[1726]: time="2025-02-13T20:45:51.841316799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c76945-fvdq7,Uid:769f719e-512b-4b14-b16e-ae7dc6a2ea08,Namespace:calico-system,Attempt:1,}"
Feb 13 20:45:51.954613 kubelet[3141]: I0213 20:45:51.954556    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7t4w9" podStartSLOduration=37.954539272 podStartE2EDuration="37.954539272s" podCreationTimestamp="2025-02-13 20:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:45:51.920104785 +0000 UTC m=+44.427921930" watchObservedRunningTime="2025-02-13 20:45:51.954539272 +0000 UTC m=+44.462356417"
Feb 13 20:45:52.050477 systemd[1]: run-netns-cni\x2db7532572\x2de483\x2d4655\x2db130\x2d89cf99dd0351.mount: Deactivated successfully.
Feb 13 20:45:52.055912 systemd-networkd[1626]: calie9f15e49129: Link UP
Feb 13 20:45:52.057018 systemd-networkd[1626]: calie9f15e49129: Gained carrier
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:51.960 [INFO][5075] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0 calico-kube-controllers-65c76945- calico-system  769f719e-512b-4b14-b16e-ae7dc6a2ea08 820 0 2025-02-13 20:45:24 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65c76945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ci-4081.3.1-a-d3f644b76a  calico-kube-controllers-65c76945-fvdq7 eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9f15e49129  [] []}} ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:51.960 [INFO][5075] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.001 [INFO][5088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" HandleID="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.013 [INFO][5088] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" HandleID="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-d3f644b76a", "pod":"calico-kube-controllers-65c76945-fvdq7", "timestamp":"2025-02-13 20:45:52.001026455 +0000 UTC"}, Hostname:"ci-4081.3.1-a-d3f644b76a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.013 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.013 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.013 [INFO][5088] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-d3f644b76a'
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.016 [INFO][5088] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.020 [INFO][5088] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.024 [INFO][5088] ipam/ipam.go 489: Trying affinity for 192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.026 [INFO][5088] ipam/ipam.go 155: Attempting to load block cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.028 [INFO][5088] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.64/26 host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.028 [INFO][5088] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.64/26 handle="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.030 [INFO][5088] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.034 [INFO][5088] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.105.64/26 handle="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.049 [INFO][5088] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.105.70/26] block=192.168.105.64/26 handle="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.049 [INFO][5088] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.70/26] handle="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" host="ci-4081.3.1-a-d3f644b76a"
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.049 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:45:52.077642 containerd[1726]: 2025-02-13 20:45:52.049 [INFO][5088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.70/26] IPv6=[] ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" HandleID="k8s-pod-network.de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.051 [INFO][5075] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0", GenerateName:"calico-kube-controllers-65c76945-", Namespace:"calico-system", SelfLink:"", UID:"769f719e-512b-4b14-b16e-ae7dc6a2ea08", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c76945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"", Pod:"calico-kube-controllers-65c76945-fvdq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9f15e49129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.051 [INFO][5075] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.105.70/32] ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.051 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9f15e49129 ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.057 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.058 [INFO][5075] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0", GenerateName:"calico-kube-controllers-65c76945-", Namespace:"calico-system", SelfLink:"", UID:"769f719e-512b-4b14-b16e-ae7dc6a2ea08", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c76945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d", Pod:"calico-kube-controllers-65c76945-fvdq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9f15e49129", MAC:"f2:aa:60:fd:5b:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:45:52.078407 containerd[1726]: 2025-02-13 20:45:52.074 [INFO][5075] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d" Namespace="calico-system" Pod="calico-kube-controllers-65c76945-fvdq7" WorkloadEndpoint="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:45:52.106412 containerd[1726]: time="2025-02-13T20:45:52.106031436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:45:52.106412 containerd[1726]: time="2025-02-13T20:45:52.106087437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:45:52.106412 containerd[1726]: time="2025-02-13T20:45:52.106146797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:52.106412 containerd[1726]: time="2025-02-13T20:45:52.106239437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:45:52.108586 systemd-networkd[1626]: cali878deea835e: Gained IPv6LL
Feb 13 20:45:52.133535 systemd[1]: Started cri-containerd-de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d.scope - libcontainer container de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d.
Feb 13 20:45:52.167675 containerd[1726]: time="2025-02-13T20:45:52.167628760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c76945-fvdq7,Uid:769f719e-512b-4b14-b16e-ae7dc6a2ea08,Namespace:calico-system,Attempt:1,} returns sandbox id \"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d\""
Feb 13 20:45:52.172575 systemd-networkd[1626]: cali26a6a4ff500: Gained IPv6LL
Feb 13 20:45:52.940637 systemd-networkd[1626]: cali61a1c5b64cd: Gained IPv6LL
Feb 13 20:45:53.325020 systemd-networkd[1626]: calie9f15e49129: Gained IPv6LL
Feb 13 20:45:53.744863 containerd[1726]: time="2025-02-13T20:45:53.736698478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:53.744863 containerd[1726]: time="2025-02-13T20:45:53.739832202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409"
Feb 13 20:45:53.747147 containerd[1726]: time="2025-02-13T20:45:53.747095332Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:53.752990 containerd[1726]: time="2025-02-13T20:45:53.752926820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:53.755009 containerd[1726]: time="2025-02-13T20:45:53.754946222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.389050164s"
Feb 13 20:45:53.755009 containerd[1726]: time="2025-02-13T20:45:53.754991542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Feb 13 20:45:53.756209 containerd[1726]: time="2025-02-13T20:45:53.756175224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 20:45:53.759724 containerd[1726]: time="2025-02-13T20:45:53.759587949Z" level=info msg="CreateContainer within sandbox \"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 20:45:53.802339 containerd[1726]: time="2025-02-13T20:45:53.802179606Z" level=info msg="CreateContainer within sandbox \"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"65846c54b38550f2d42b9417bcd268bc5cffff6fd11438fd0d4a2de0f4ea8f80\""
Feb 13 20:45:53.805368 containerd[1726]: time="2025-02-13T20:45:53.803409568Z" level=info msg="StartContainer for \"65846c54b38550f2d42b9417bcd268bc5cffff6fd11438fd0d4a2de0f4ea8f80\""
Feb 13 20:45:53.858542 systemd[1]: Started cri-containerd-65846c54b38550f2d42b9417bcd268bc5cffff6fd11438fd0d4a2de0f4ea8f80.scope - libcontainer container 65846c54b38550f2d42b9417bcd268bc5cffff6fd11438fd0d4a2de0f4ea8f80.
Feb 13 20:45:53.907307 containerd[1726]: time="2025-02-13T20:45:53.907222188Z" level=info msg="StartContainer for \"65846c54b38550f2d42b9417bcd268bc5cffff6fd11438fd0d4a2de0f4ea8f80\" returns successfully"
Feb 13 20:45:54.210875 containerd[1726]: time="2025-02-13T20:45:54.210824598Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:54.216083 containerd[1726]: time="2025-02-13T20:45:54.216044645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 20:45:54.219070 containerd[1726]: time="2025-02-13T20:45:54.218982249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 462.770345ms"
Feb 13 20:45:54.219135 containerd[1726]: time="2025-02-13T20:45:54.219076409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Feb 13 20:45:54.221554 containerd[1726]: time="2025-02-13T20:45:54.221518772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 20:45:54.223621 containerd[1726]: time="2025-02-13T20:45:54.223585615Z" level=info msg="CreateContainer within sandbox \"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 20:45:54.282187 containerd[1726]: time="2025-02-13T20:45:54.282135414Z" level=info msg="CreateContainer within sandbox \"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3ac01df8bf47301c8ce4a18c63d29c1fbc3afbfa52badcf80e195da56d9535cc\""
Feb 13 20:45:54.283920 containerd[1726]: time="2025-02-13T20:45:54.282751015Z" level=info msg="StartContainer for \"3ac01df8bf47301c8ce4a18c63d29c1fbc3afbfa52badcf80e195da56d9535cc\""
Feb 13 20:45:54.318699 systemd[1]: Started cri-containerd-3ac01df8bf47301c8ce4a18c63d29c1fbc3afbfa52badcf80e195da56d9535cc.scope - libcontainer container 3ac01df8bf47301c8ce4a18c63d29c1fbc3afbfa52badcf80e195da56d9535cc.
Feb 13 20:45:54.465397 containerd[1726]: time="2025-02-13T20:45:54.463909899Z" level=info msg="StartContainer for \"3ac01df8bf47301c8ce4a18c63d29c1fbc3afbfa52badcf80e195da56d9535cc\" returns successfully"
Feb 13 20:45:54.927918 kubelet[3141]: I0213 20:45:54.927883    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:45:54.948679 kubelet[3141]: I0213 20:45:54.948041    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc65467b6-c2v2b" podStartSLOduration=27.557809627 podStartE2EDuration="31.948023833s" podCreationTimestamp="2025-02-13 20:45:23 +0000 UTC" firstStartedPulling="2025-02-13 20:45:49.365536657 +0000 UTC m=+41.873353802" lastFinishedPulling="2025-02-13 20:45:53.755750863 +0000 UTC m=+46.263568008" observedRunningTime="2025-02-13 20:45:53.945428679 +0000 UTC m=+46.453245784" watchObservedRunningTime="2025-02-13 20:45:54.948023833 +0000 UTC m=+47.455840978"
Feb 13 20:45:55.930203 kubelet[3141]: I0213 20:45:55.930132    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:45:56.799053 containerd[1726]: time="2025-02-13T20:45:56.798547406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:56.803019 containerd[1726]: time="2025-02-13T20:45:56.802854411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Feb 13 20:45:56.808161 containerd[1726]: time="2025-02-13T20:45:56.808090497Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:56.815844 containerd[1726]: time="2025-02-13T20:45:56.815793505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:56.817071 containerd[1726]: time="2025-02-13T20:45:56.816689226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.595132574s"
Feb 13 20:45:56.817071 containerd[1726]: time="2025-02-13T20:45:56.816723587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Feb 13 20:45:56.820409 containerd[1726]: time="2025-02-13T20:45:56.820360151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Feb 13 20:45:56.822619 containerd[1726]: time="2025-02-13T20:45:56.822550873Z" level=info msg="CreateContainer within sandbox \"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 20:45:56.867625 containerd[1726]: time="2025-02-13T20:45:56.867563004Z" level=info msg="CreateContainer within sandbox \"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b8a49ad2c92d91437342ac4f748fcc70bb488d85b51fa4e264a063626d99c3ae\""
Feb 13 20:45:56.868282 containerd[1726]: time="2025-02-13T20:45:56.868251045Z" level=info msg="StartContainer for \"b8a49ad2c92d91437342ac4f748fcc70bb488d85b51fa4e264a063626d99c3ae\""
Feb 13 20:45:56.905583 systemd[1]: Started cri-containerd-b8a49ad2c92d91437342ac4f748fcc70bb488d85b51fa4e264a063626d99c3ae.scope - libcontainer container b8a49ad2c92d91437342ac4f748fcc70bb488d85b51fa4e264a063626d99c3ae.
Feb 13 20:45:56.933864 containerd[1726]: time="2025-02-13T20:45:56.933689480Z" level=info msg="StartContainer for \"b8a49ad2c92d91437342ac4f748fcc70bb488d85b51fa4e264a063626d99c3ae\" returns successfully"
Feb 13 20:45:58.612440 containerd[1726]: time="2025-02-13T20:45:58.612385471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:58.615380 containerd[1726]: time="2025-02-13T20:45:58.615199794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Feb 13 20:45:58.620141 containerd[1726]: time="2025-02-13T20:45:58.620062720Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:58.627875 containerd[1726]: time="2025-02-13T20:45:58.627808609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:45:58.628860 containerd[1726]: time="2025-02-13T20:45:58.628475009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.808066418s"
Feb 13 20:45:58.628860 containerd[1726]: time="2025-02-13T20:45:58.628513129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Feb 13 20:45:58.630138 containerd[1726]: time="2025-02-13T20:45:58.630101891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 20:45:58.661561 containerd[1726]: time="2025-02-13T20:45:58.661514207Z" level=info msg="CreateContainer within sandbox \"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 20:45:58.702546 containerd[1726]: time="2025-02-13T20:45:58.702501414Z" level=info msg="CreateContainer within sandbox \"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8a4a3aea9522d066b7402af6acbb6d1b79ee252adbd632efda80167b6c1fb03\""
Feb 13 20:45:58.703884 containerd[1726]: time="2025-02-13T20:45:58.703487055Z" level=info msg="StartContainer for \"e8a4a3aea9522d066b7402af6acbb6d1b79ee252adbd632efda80167b6c1fb03\""
Feb 13 20:45:58.737551 systemd[1]: Started cri-containerd-e8a4a3aea9522d066b7402af6acbb6d1b79ee252adbd632efda80167b6c1fb03.scope - libcontainer container e8a4a3aea9522d066b7402af6acbb6d1b79ee252adbd632efda80167b6c1fb03.
Feb 13 20:45:58.772423 containerd[1726]: time="2025-02-13T20:45:58.772374893Z" level=info msg="StartContainer for \"e8a4a3aea9522d066b7402af6acbb6d1b79ee252adbd632efda80167b6c1fb03\" returns successfully"
Feb 13 20:45:58.964223 kubelet[3141]: I0213 20:45:58.963998    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65c76945-fvdq7" podStartSLOduration=28.504526024 podStartE2EDuration="34.963978831s" podCreationTimestamp="2025-02-13 20:45:24 +0000 UTC" firstStartedPulling="2025-02-13 20:45:52.169896643 +0000 UTC m=+44.677713748" lastFinishedPulling="2025-02-13 20:45:58.62934941 +0000 UTC m=+51.137166555" observedRunningTime="2025-02-13 20:45:58.958965706 +0000 UTC m=+51.466782851" watchObservedRunningTime="2025-02-13 20:45:58.963978831 +0000 UTC m=+51.471795976"
Feb 13 20:45:58.964799 kubelet[3141]: I0213 20:45:58.964722    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc65467b6-ssgrs" podStartSLOduration=31.934521352 podStartE2EDuration="35.964712192s" podCreationTimestamp="2025-02-13 20:45:23 +0000 UTC" firstStartedPulling="2025-02-13 20:45:50.190357371 +0000 UTC m=+42.698174516" lastFinishedPulling="2025-02-13 20:45:54.220548131 +0000 UTC m=+46.728365356" observedRunningTime="2025-02-13 20:45:54.951357837 +0000 UTC m=+47.459175022" watchObservedRunningTime="2025-02-13 20:45:58.964712192 +0000 UTC m=+51.472529337"
Feb 13 20:45:59.946127 kubelet[3141]: I0213 20:45:59.946000    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:46:00.063586 containerd[1726]: time="2025-02-13T20:46:00.063535283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:46:00.067181 containerd[1726]: time="2025-02-13T20:46:00.067132647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 20:46:00.072585 containerd[1726]: time="2025-02-13T20:46:00.072526894Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:46:00.077842 containerd[1726]: time="2025-02-13T20:46:00.077799140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:46:00.078514 containerd[1726]: time="2025-02-13T20:46:00.078374860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.448232929s"
Feb 13 20:46:00.078514 containerd[1726]: time="2025-02-13T20:46:00.078411380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 20:46:00.081958 containerd[1726]: time="2025-02-13T20:46:00.081760224Z" level=info msg="CreateContainer within sandbox \"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 20:46:00.126496 containerd[1726]: time="2025-02-13T20:46:00.126445195Z" level=info msg="CreateContainer within sandbox \"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"24748d9222d0994ff599c649ed9521316befa78f5a52adb20818e440050068a7\""
Feb 13 20:46:00.127258 containerd[1726]: time="2025-02-13T20:46:00.127212996Z" level=info msg="StartContainer for \"24748d9222d0994ff599c649ed9521316befa78f5a52adb20818e440050068a7\""
Feb 13 20:46:00.175524 systemd[1]: Started cri-containerd-24748d9222d0994ff599c649ed9521316befa78f5a52adb20818e440050068a7.scope - libcontainer container 24748d9222d0994ff599c649ed9521316befa78f5a52adb20818e440050068a7.
Feb 13 20:46:00.209544 containerd[1726]: time="2025-02-13T20:46:00.208771569Z" level=info msg="StartContainer for \"24748d9222d0994ff599c649ed9521316befa78f5a52adb20818e440050068a7\" returns successfully"
Feb 13 20:46:00.821535 kubelet[3141]: I0213 20:46:00.821397    3141 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 20:46:00.821535 kubelet[3141]: I0213 20:46:00.821442    3141 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 20:46:04.896015 kubelet[3141]: I0213 20:46:04.895933    3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z9m58" podStartSLOduration=33.036926936 podStartE2EDuration="41.895914316s" podCreationTimestamp="2025-02-13 20:45:23 +0000 UTC" firstStartedPulling="2025-02-13 20:45:51.220475441 +0000 UTC m=+43.728292586" lastFinishedPulling="2025-02-13 20:46:00.079462821 +0000 UTC m=+52.587279966" observedRunningTime="2025-02-13 20:46:00.969123154 +0000 UTC m=+53.476940299" watchObservedRunningTime="2025-02-13 20:46:04.895914316 +0000 UTC m=+57.403731421"
Feb 13 20:46:10.701955 containerd[1726]: time="2025-02-13T20:46:10.701913652Z" level=info msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\""
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.747 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"693eb617-fd33-495b-bd76-299eb1a516ac", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6", Pod:"calico-apiserver-7bc65467b6-ssgrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali878deea835e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.748 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.748 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" iface="eth0" netns=""
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.749 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.749 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.769 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.769 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.769 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.778 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.778 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.779 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:10.783825 containerd[1726]: 2025-02-13 20:46:10.781 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.783825 containerd[1726]: time="2025-02-13T20:46:10.783574281Z" level=info msg="TearDown network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" successfully"
Feb 13 20:46:10.783825 containerd[1726]: time="2025-02-13T20:46:10.783600161Z" level=info msg="StopPodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" returns successfully"
Feb 13 20:46:10.784903 containerd[1726]: time="2025-02-13T20:46:10.784539082Z" level=info msg="RemovePodSandbox for \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\""
Feb 13 20:46:10.787203 containerd[1726]: time="2025-02-13T20:46:10.787155845Z" level=info msg="Forcibly stopping sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\""
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.845 [WARNING][5507] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"693eb617-fd33-495b-bd76-299eb1a516ac", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"2064cb09b9fab0ca98a3d8cc71d0de67d85cfd3d7dfb92d175c6f588fe4ad6d6", Pod:"calico-apiserver-7bc65467b6-ssgrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali878deea835e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.846 [INFO][5507] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.846 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" iface="eth0" netns=""
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.846 [INFO][5507] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.846 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.872 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.872 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.872 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.883 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.883 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" HandleID="k8s-pod-network.99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--ssgrs-eth0"
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.887 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:10.892686 containerd[1726]: 2025-02-13 20:46:10.890 [INFO][5507] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423"
Feb 13 20:46:10.895253 containerd[1726]: time="2025-02-13T20:46:10.892730586Z" level=info msg="TearDown network for sandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" successfully"
Feb 13 20:46:10.924107 containerd[1726]: time="2025-02-13T20:46:10.924060348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:10.924228 containerd[1726]: time="2025-02-13T20:46:10.924136268Z" level=info msg="RemovePodSandbox \"99ca5c1233316e1e37b4a256b704be49d24e417fd9666f6198ed551139b69423\" returns successfully"
Feb 13 20:46:10.925112 containerd[1726]: time="2025-02-13T20:46:10.925068669Z" level=info msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\""
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:10.974 [WARNING][5531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a11cb515-89d7-469b-9b5c-347d66dd86cd", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4", Pod:"coredns-6f6b679f8f-7t4w9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61a1c5b64cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:10.975 [INFO][5531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:10.975 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" iface="eth0" netns=""
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:10.975 [INFO][5531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:10.975 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.001 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.001 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.001 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.010 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.010 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.011 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.014650 containerd[1726]: 2025-02-13 20:46:11.013 [INFO][5531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.015661 containerd[1726]: time="2025-02-13T20:46:11.015430269Z" level=info msg="TearDown network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" successfully"
Feb 13 20:46:11.015661 containerd[1726]: time="2025-02-13T20:46:11.015521590Z" level=info msg="StopPodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" returns successfully"
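
The WARNING at k8s.go 572 above is Calico's stale-DEL guard: the CNI DEL carries the ID of the sandbox being torn down (ec8afbe9...), but the stored WorkloadEndpoint records the ContainerID of the last ADD (b41de006...), so the plugin refuses to delete the WEP and only proceeds with netns cleanup and IP release. A minimal sketch of that guard, with simplified, illustrative types (the real check lives in projectcalico's cni-plugin and its v3.WorkloadEndpoint):

    package main

    import "fmt"

    // WorkloadEndpoint keeps only the field the guard needs; the real
    // v3.WorkloadEndpoint carries far more, as the dump above shows.
    type WorkloadEndpoint struct {
        ContainerID string
    }

    // shouldDeleteWEP reports whether a CNI DEL for delContainerID may
    // remove the stored WorkloadEndpoint. If a newer ADD re-created the
    // WEP under a different ContainerID, the stale DEL must leave it be.
    func shouldDeleteWEP(wep *WorkloadEndpoint, delContainerID string) bool {
        if wep == nil {
            return false // nothing to delete
        }
        if wep.ContainerID != delContainerID {
            // Matches the WARNING above: don't delete the WEP, but the
            // caller still runs netns cleanup and IP release for this DEL.
            return false
        }
        return true
    }

    func main() {
        wep := &WorkloadEndpoint{ContainerID: "b41de006"}
        fmt.Println(shouldDeleteWEP(wep, "ec8afbe9")) // false: IDs differ
    }
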
Feb 13 20:46:11.016025 containerd[1726]: time="2025-02-13T20:46:11.015993830Z" level=info msg="RemovePodSandbox for \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\""
Feb 13 20:46:11.016075 containerd[1726]: time="2025-02-13T20:46:11.016032350Z" level=info msg="Forcibly stopping sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\""
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.058 [WARNING][5556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a11cb515-89d7-469b-9b5c-347d66dd86cd", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"b41de00624b6d30caee66c201aeabf777c3446913928602d9be08cd30fb9b7b4", Pod:"coredns-6f6b679f8f-7t4w9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61a1c5b64cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.058 [INFO][5556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.058 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" iface="eth0" netns=""
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.058 [INFO][5556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.058 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.077 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.077 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.077 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.087 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.087 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" HandleID="k8s-pod-network.ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--7t4w9-eth0"
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.089 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.092465 containerd[1726]: 2025-02-13 20:46:11.090 [INFO][5556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b"
Feb 13 20:46:11.092880 containerd[1726]: time="2025-02-13T20:46:11.092478772Z" level=info msg="TearDown network for sandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" successfully"
Feb 13 20:46:11.116531 containerd[1726]: time="2025-02-13T20:46:11.116453084Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:11.116654 containerd[1726]: time="2025-02-13T20:46:11.116583364Z" level=info msg="RemovePodSandbox \"ec8afbe92bd7f90ac7617d9304125fb164dfbff3fa928cdb0e3148246fec7a1b\" returns successfully"
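
The [5537] and [5563] ipam lines trace the same release sequence both times: acquire the host-wide IPAM lock, try to release by handleID ("k8s-pod-network.<containerID>"), warn and ignore when the address is already gone, fall back to releasing by workload ID, then drop the lock. A toy sketch of that two-step release under a process-wide lock, assuming a simple in-memory allocator keyed two ways (Calico's real IPAM goes through its datastore, and a real allocator would keep both indexes consistent):

    package main

    import (
        "fmt"
        "sync"
    )

    // ipamStore is a toy allocator indexed by handleID and by workload,
    // mirroring the two lookups in the log. All names are illustrative.
    type ipamStore struct {
        mu         sync.Mutex // stands in for the "host-wide IPAM lock"
        byHandle   map[string]string
        byWorkload map[string]string
    }

    // release mirrors the logged order: lock, try handleID,
    // warn-and-ignore if absent, then try the workload ID, then unlock.
    func (s *ipamStore) release(handleID, workload string) {
        s.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer s.mu.Unlock() // "Released host-wide IPAM lock."

        if ip, ok := s.byHandle[handleID]; ok {
            delete(s.byHandle, handleID)
            fmt.Println("released", ip, "via handleID")
            return
        }
        // "Asked to release address but it doesn't exist. Ignoring."
        if ip, ok := s.byWorkload[workload]; ok {
            delete(s.byWorkload, workload)
            fmt.Println("released", ip, "via workloadID")
        }
    }

    func main() {
        s := &ipamStore{byHandle: map[string]string{}, byWorkload: map[string]string{}}
        s.release("k8s-pod-network.ec8afbe9", "coredns-6f6b679f8f-7t4w9-eth0")
    }
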
Feb 13 20:46:11.117151 containerd[1726]: time="2025-02-13T20:46:11.117124085Z" level=info msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\""
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.162 [WARNING][5581] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"94141c30-c43e-4a2f-8964-6298496fe9ed", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32", Pod:"calico-apiserver-7bc65467b6-c2v2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dfc9c85e4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.162 [INFO][5581] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.162 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" iface="eth0" netns=""
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.162 [INFO][5581] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.162 [INFO][5581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.186 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.186 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.186 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.198 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.198 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.203 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.208013 containerd[1726]: 2025-02-13 20:46:11.206 [INFO][5581] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.209195 containerd[1726]: time="2025-02-13T20:46:11.208050566Z" level=info msg="TearDown network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" successfully"
Feb 13 20:46:11.209195 containerd[1726]: time="2025-02-13T20:46:11.208077366Z" level=info msg="StopPodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" returns successfully"
Feb 13 20:46:11.209195 containerd[1726]: time="2025-02-13T20:46:11.209043247Z" level=info msg="RemovePodSandbox for \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\""
Feb 13 20:46:11.209195 containerd[1726]: time="2025-02-13T20:46:11.209119367Z" level=info msg="Forcibly stopping sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\""
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.249 [WARNING][5605] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0", GenerateName:"calico-apiserver-7bc65467b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"94141c30-c43e-4a2f-8964-6298496fe9ed", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc65467b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"4fa1bcd8401cce968118659f1ad25e97769dacb9c95ee15ed9259d845d4b5b32", Pod:"calico-apiserver-7bc65467b6-c2v2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5dfc9c85e4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.250 [INFO][5605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.250 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" iface="eth0" netns=""
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.250 [INFO][5605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.250 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.279 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.280 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.280 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.292 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.292 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" HandleID="k8s-pod-network.d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--apiserver--7bc65467b6--c2v2b-eth0"
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.295 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.299792 containerd[1726]: 2025-02-13 20:46:11.297 [INFO][5605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5"
Feb 13 20:46:11.301287 containerd[1726]: time="2025-02-13T20:46:11.301124530Z" level=info msg="TearDown network for sandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" successfully"
Feb 13 20:46:11.309432 containerd[1726]: time="2025-02-13T20:46:11.309379661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:11.309562 containerd[1726]: time="2025-02-13T20:46:11.309466701Z" level=info msg="RemovePodSandbox \"d050459dc15b43cf6c825df5e383e4a63aaecd7734ac3cce09d029f9d0d6e8c5\" returns successfully"
Feb 13 20:46:11.310350 containerd[1726]: time="2025-02-13T20:46:11.310065182Z" level=info msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\""
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.348 [WARNING][5629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"01611f6f-431c-448d-b299-3b089d806504", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7", Pod:"coredns-6f6b679f8f-zzzs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f8311cec0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.349 [INFO][5629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.349 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" iface="eth0" netns=""
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.349 [INFO][5629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.349 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.373 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.374 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.374 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.383 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.383 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.386 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.390115 containerd[1726]: 2025-02-13 20:46:11.388 [INFO][5629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.390558 containerd[1726]: time="2025-02-13T20:46:11.390196009Z" level=info msg="TearDown network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" successfully"
Feb 13 20:46:11.390558 containerd[1726]: time="2025-02-13T20:46:11.390220809Z" level=info msg="StopPodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" returns successfully"
Feb 13 20:46:11.391230 containerd[1726]: time="2025-02-13T20:46:11.390711769Z" level=info msg="RemovePodSandbox for \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\""
Feb 13 20:46:11.391230 containerd[1726]: time="2025-02-13T20:46:11.390747889Z" level=info msg="Forcibly stopping sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\""
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.426 [WARNING][5653] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"01611f6f-431c-448d-b299-3b089d806504", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"61ea66a62e76c7c293e37992434231548d8fde2591c980591e6c38f18498c0a7", Pod:"coredns-6f6b679f8f-zzzs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f8311cec0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.427 [INFO][5653] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.427 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" iface="eth0" netns=""
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.427 [INFO][5653] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.427 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.459 [INFO][5661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.459 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.460 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.474 [WARNING][5661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.475 [INFO][5661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" HandleID="k8s-pod-network.874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c" Workload="ci--4081.3.1--a--d3f644b76a-k8s-coredns--6f6b679f8f--zzzs7-eth0"
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.484 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.488371 containerd[1726]: 2025-02-13 20:46:11.486 [INFO][5653] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c"
Feb 13 20:46:11.488371 containerd[1726]: time="2025-02-13T20:46:11.488086939Z" level=info msg="TearDown network for sandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" successfully"
Feb 13 20:46:11.500594 containerd[1726]: time="2025-02-13T20:46:11.500364795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:11.500594 containerd[1726]: time="2025-02-13T20:46:11.500490596Z" level=info msg="RemovePodSandbox \"874cd36fd8837398df749320b31dd445cd4c3f38ff1c207eaed07a3329f0394c\" returns successfully"
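
The "Sending the event with nil podSandboxStatus" warnings come from containerd's CRI event path: when emitting a container event it tries to attach the sandbox's current status, and since the sandbox metadata is already gone it degrades to a nil status rather than dropping the event. A sketch of that fetch-or-nil pattern, with hypothetical types standing in for containerd's internals:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    type sandboxStatus struct{ State string }

    // getStatus stands in for the store lookup that fails in the log.
    func getStatus(id string) (*sandboxStatus, error) { return nil, errNotFound }

    // eventStatus prefers a real status but degrades to nil so the
    // event itself is still delivered, as the warnings above describe.
    func eventStatus(id string) *sandboxStatus {
        st, err := getStatus(id)
        if err != nil {
            fmt.Printf("warning: no podSandbox status for %s: %v; sending event with nil podSandboxStatus\n", id, err)
            return nil
        }
        return st
    }

    func main() { _ = eventStatus("874cd36f") }
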
Feb 13 20:46:11.501379 containerd[1726]: time="2025-02-13T20:46:11.500943636Z" level=info msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\""
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.565 [WARNING][5680] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e5305d-78c3-49cd-bb6d-90df1e2b864e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd", Pod:"csi-node-driver-z9m58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26a6a4ff500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.565 [INFO][5680] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.565 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" iface="eth0" netns=""
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.565 [INFO][5680] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.565 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.584 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.584 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.584 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.598 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.598 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.600 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.604127 containerd[1726]: 2025-02-13 20:46:11.602 [INFO][5680] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.605390 containerd[1726]: time="2025-02-13T20:46:11.604176494Z" level=info msg="TearDown network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" successfully"
Feb 13 20:46:11.605390 containerd[1726]: time="2025-02-13T20:46:11.604201094Z" level=info msg="StopPodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" returns successfully"
Feb 13 20:46:11.605390 containerd[1726]: time="2025-02-13T20:46:11.605124655Z" level=info msg="RemovePodSandbox for \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\""
Feb 13 20:46:11.605390 containerd[1726]: time="2025-02-13T20:46:11.605155255Z" level=info msg="Forcibly stopping sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\""
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.647 [WARNING][5704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e5305d-78c3-49cd-bb6d-90df1e2b864e", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"149c102437ddfc942649da839827e1e0e631595ba56aafc88c3809861eaf66fd", Pod:"csi-node-driver-z9m58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26a6a4ff500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.647 [INFO][5704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.647 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" iface="eth0" netns=""
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.647 [INFO][5704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.647 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.673 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.674 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.674 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.685 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.685 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" HandleID="k8s-pod-network.e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1" Workload="ci--4081.3.1--a--d3f644b76a-k8s-csi--node--driver--z9m58-eth0"
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.687 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.694152 containerd[1726]: 2025-02-13 20:46:11.691 [INFO][5704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1"
Feb 13 20:46:11.694581 containerd[1726]: time="2025-02-13T20:46:11.694192654Z" level=info msg="TearDown network for sandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" successfully"
Feb 13 20:46:11.703580 containerd[1726]: time="2025-02-13T20:46:11.703525226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:11.703923 containerd[1726]: time="2025-02-13T20:46:11.703604506Z" level=info msg="RemovePodSandbox \"e1757aaadae691f8b88259849a4889c8f0e04577a8eea7fc9c9db623700faaf1\" returns successfully"
Feb 13 20:46:11.705392 containerd[1726]: time="2025-02-13T20:46:11.704500227Z" level=info msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\""
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.743 [WARNING][5728] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0", GenerateName:"calico-kube-controllers-65c76945-", Namespace:"calico-system", SelfLink:"", UID:"769f719e-512b-4b14-b16e-ae7dc6a2ea08", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c76945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d", Pod:"calico-kube-controllers-65c76945-fvdq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9f15e49129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.743 [INFO][5728] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.743 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" iface="eth0" netns=""
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.743 [INFO][5728] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.743 [INFO][5728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.762 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.762 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.762 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.770 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.771 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.772 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.774994 containerd[1726]: 2025-02-13 20:46:11.773 [INFO][5728] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.775642 containerd[1726]: time="2025-02-13T20:46:11.775034321Z" level=info msg="TearDown network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" successfully"
Feb 13 20:46:11.775642 containerd[1726]: time="2025-02-13T20:46:11.775060281Z" level=info msg="StopPodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" returns successfully"
Feb 13 20:46:11.776399 containerd[1726]: time="2025-02-13T20:46:11.776068963Z" level=info msg="RemovePodSandbox for \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\""
Feb 13 20:46:11.776399 containerd[1726]: time="2025-02-13T20:46:11.776101723Z" level=info msg="Forcibly stopping sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\""
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.813 [WARNING][5753] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0", GenerateName:"calico-kube-controllers-65c76945-", Namespace:"calico-system", SelfLink:"", UID:"769f719e-512b-4b14-b16e-ae7dc6a2ea08", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 45, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c76945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-d3f644b76a", ContainerID:"de554bac56772affcf6268d41a6b44bb5386cb32e38f048625b45a8dfbc3760d", Pod:"calico-kube-controllers-65c76945-fvdq7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9f15e49129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.814 [INFO][5753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.814 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" iface="eth0" netns=""
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.814 [INFO][5753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.814 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.835 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.835 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.835 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.845 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.845 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" HandleID="k8s-pod-network.5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f" Workload="ci--4081.3.1--a--d3f644b76a-k8s-calico--kube--controllers--65c76945--fvdq7-eth0"
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.847 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:46:11.851616 containerd[1726]: 2025-02-13 20:46:11.849 [INFO][5753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f"
Feb 13 20:46:11.852345 containerd[1726]: time="2025-02-13T20:46:11.851922864Z" level=info msg="TearDown network for sandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" successfully"
Feb 13 20:46:11.864488 containerd[1726]: time="2025-02-13T20:46:11.864356560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:46:11.864784 containerd[1726]: time="2025-02-13T20:46:11.864664561Z" level=info msg="RemovePodSandbox \"5730a3bc9f18e3e4ed8480ce071d4d8b7f8faeb817bc093eee36e632123e500f\" returns successfully"
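
Every sandbox above runs the same StopPodSandbox → RemovePodSandbox → "Forcibly stopping" cycle, and each step returns successfully even though the network and addresses are long gone: sandbox garbage collection relies on both CRI calls being idempotent. A sketch of such a cleanup loop against a hypothetical CRI-like client (the real kubelet drives the CRI RuntimeService; the interface here is a stand-in):

    package main

    import "fmt"

    // criRuntime models just the two calls the log exercises;
    // both must tolerate being called on already-dead sandboxes.
    type criRuntime interface {
        StopPodSandbox(id string) error
        RemovePodSandbox(id string) error
    }

    // collectSandboxes tears down each dead sandbox; a failure on one
    // ID must not block the rest, matching the per-sandbox log cycles.
    func collectSandboxes(rt criRuntime, ids []string) {
        for _, id := range ids {
            if err := rt.StopPodSandbox(id); err != nil {
                fmt.Println("stop failed, retry next GC pass:", id, err)
                continue
            }
            if err := rt.RemovePodSandbox(id); err != nil {
                fmt.Println("remove failed, retry next GC pass:", id, err)
            }
        }
    }

    type fakeRT struct{}

    func (fakeRT) StopPodSandbox(id string) error   { return nil }
    func (fakeRT) RemovePodSandbox(id string) error { return nil }

    func main() {
        collectSandboxes(fakeRT{}, []string{"ec8afbe9", "d050459d", "874cd36f"})
    }
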
Feb 13 20:46:29.631159 kubelet[3141]: I0213 20:46:29.630775    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 20:46:30.195870 kubelet[3141]: I0213 20:46:30.195461    3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
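
The two kubelet lines are the readiness prober declining a manual probe run: on events like a container restart, kubelet nudges the probe worker through a small buffered channel, and when the slot is already full the non-blocking send fails and this message is logged; the periodic probe covers the container shortly afterwards regardless. A sketch of that non-blocking trigger, assuming a one-slot channel as the message suggests:

    package main

    import "fmt"

    // worker models a probe worker with a one-slot manual trigger,
    // the shape the kubelet message above hints at.
    type worker struct {
        manualTrigger chan struct{}
    }

    // triggerManualRun attempts a non-blocking nudge; if the slot is
    // full the request is dropped and logged, never blocking the caller.
    func (w *worker) triggerManualRun(probe string) {
        select {
        case w.manualTrigger <- struct{}{}:
        default:
            fmt.Printf("Failed to trigger a manual run probe=%q\n", probe)
        }
    }

    func main() {
        w := &worker{manualTrigger: make(chan struct{}, 1)}
        w.triggerManualRun("Readiness") // fills the slot
        w.triggerManualRun("Readiness") // dropped: prints the message
    }
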
Feb 13 20:47:09.193557 systemd[1]: Started sshd@7-10.200.20.20:22-10.200.16.10:41708.service - OpenSSH per-connection server daemon (10.200.16.10:41708).
Feb 13 20:47:09.649620 sshd[5896]: Accepted publickey for core from 10.200.16.10 port 41708 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:09.651749 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:09.656536 systemd-logind[1698]: New session 10 of user core.
Feb 13 20:47:09.661553 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:47:10.049227 sshd[5896]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:10.052424 systemd[1]: sshd@7-10.200.20.20:22-10.200.16.10:41708.service: Deactivated successfully.
Feb 13 20:47:10.055113 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:47:10.056664 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:47:10.058086 systemd-logind[1698]: Removed session 10.
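
The unit names like sshd@7-10.200.20.20:22-10.200.16.10:41708.service mark per-connection socket activation: systemd owns the listening socket on port 22 and instantiates a templated sshd service for each accepted connection, encoding the local and remote endpoints in the instance name, while pam_unix and systemd-logind bracket the login as session-10.scope. As a rough analogy in Go (not how sshd itself is wired, just the accept-per-connection model):

    package main

    import (
        "fmt"
        "net"
    )

    // servePerConnection mirrors an accept-per-connection socket unit:
    // the listener accepts, and every connection gets its own short-lived
    // handler "instance" named after both endpoints.
    func servePerConnection(l net.Listener, started chan<- string) {
        for n := 0; ; n++ {
            conn, err := l.Accept()
            if err != nil {
                return
            }
            instance := fmt.Sprintf("sshd@%d-%s-%s", n, conn.LocalAddr(), conn.RemoteAddr())
            go func(c net.Conn) {
                defer c.Close() // "instance" exits when the connection ends
                started <- instance
            }(conn)
        }
    }

    func main() {
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        started := make(chan string)
        go servePerConnection(l, started)
        c, err := net.Dial("tcp", l.Addr().String()) // one connection, one instance
        if err != nil {
            panic(err)
        }
        defer c.Close()
        fmt.Println("started", <-started)
    }
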
Feb 13 20:47:15.134586 systemd[1]: Started sshd@8-10.200.20.20:22-10.200.16.10:41724.service - OpenSSH per-connection server daemon (10.200.16.10:41724).
Feb 13 20:47:15.574201 sshd[5915]: Accepted publickey for core from 10.200.16.10 port 41724 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:15.575953 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:15.580963 systemd-logind[1698]: New session 11 of user core.
Feb 13 20:47:15.587547 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:47:15.975574 sshd[5915]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:15.979687 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:47:15.979693 systemd[1]: sshd@8-10.200.20.20:22-10.200.16.10:41724.service: Deactivated successfully.
Feb 13 20:47:15.981876 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:47:15.983439 systemd-logind[1698]: Removed session 11.
Feb 13 20:47:21.064441 systemd[1]: Started sshd@9-10.200.20.20:22-10.200.16.10:38094.service - OpenSSH per-connection server daemon (10.200.16.10:38094).
Feb 13 20:47:21.549159 sshd[5944]: Accepted publickey for core from 10.200.16.10 port 38094 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:21.550616 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:21.554658 systemd-logind[1698]: New session 12 of user core.
Feb 13 20:47:21.560665 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:47:21.958551 sshd[5944]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:21.961225 systemd[1]: sshd@9-10.200.20.20:22-10.200.16.10:38094.service: Deactivated successfully.
Feb 13 20:47:21.964133 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:47:21.966246 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:47:21.967852 systemd-logind[1698]: Removed session 12.
Feb 13 20:47:22.043665 systemd[1]: Started sshd@10-10.200.20.20:22-10.200.16.10:38104.service - OpenSSH per-connection server daemon (10.200.16.10:38104).
Feb 13 20:47:22.482354 sshd[5958]: Accepted publickey for core from 10.200.16.10 port 38104 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:22.483724 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:22.488398 systemd-logind[1698]: New session 13 of user core.
Feb 13 20:47:22.496524 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:47:22.924725 sshd[5958]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:22.931598 systemd[1]: sshd@10-10.200.20.20:22-10.200.16.10:38104.service: Deactivated successfully.
Feb 13 20:47:22.933988 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:47:22.935173 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:47:22.936604 systemd-logind[1698]: Removed session 13.
Feb 13 20:47:23.011549 systemd[1]: Started sshd@11-10.200.20.20:22-10.200.16.10:38106.service - OpenSSH per-connection server daemon (10.200.16.10:38106).
Feb 13 20:47:23.456152 sshd[5969]: Accepted publickey for core from 10.200.16.10 port 38106 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:23.456762 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:23.461481 systemd-logind[1698]: New session 14 of user core.
Feb 13 20:47:23.464554 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:47:23.848588 sshd[5969]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:23.852578 systemd[1]: sshd@11-10.200.20.20:22-10.200.16.10:38106.service: Deactivated successfully.
Feb 13 20:47:23.856047 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:47:23.857431 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:47:23.859241 systemd-logind[1698]: Removed session 14.
Feb 13 20:47:28.945607 systemd[1]: Started sshd@12-10.200.20.20:22-10.200.16.10:51654.service - OpenSSH per-connection server daemon (10.200.16.10:51654).
Feb 13 20:47:29.426843 sshd[5991]: Accepted publickey for core from 10.200.16.10 port 51654 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:29.428210 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:29.432493 systemd-logind[1698]: New session 15 of user core.
Feb 13 20:47:29.440509 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:47:29.848797 sshd[5991]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:29.852806 systemd[1]: sshd@12-10.200.20.20:22-10.200.16.10:51654.service: Deactivated successfully.
Feb 13 20:47:29.854620 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:47:29.855549 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:47:29.856673 systemd-logind[1698]: Removed session 15.
Feb 13 20:47:34.945079 systemd[1]: Started sshd@13-10.200.20.20:22-10.200.16.10:51662.service - OpenSSH per-connection server daemon (10.200.16.10:51662).
Feb 13 20:47:35.433646 sshd[6047]: Accepted publickey for core from 10.200.16.10 port 51662 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:35.435040 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:35.440395 systemd-logind[1698]: New session 16 of user core.
Feb 13 20:47:35.451530 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:47:35.873730 sshd[6047]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:35.876465 systemd[1]: sshd@13-10.200.20.20:22-10.200.16.10:51662.service: Deactivated successfully.
Feb 13 20:47:35.878474 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:47:35.880370 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:47:35.881679 systemd-logind[1698]: Removed session 16.
Feb 13 20:47:40.959034 systemd[1]: Started sshd@14-10.200.20.20:22-10.200.16.10:60218.service - OpenSSH per-connection server daemon (10.200.16.10:60218).
Feb 13 20:47:41.405348 sshd[6080]: Accepted publickey for core from 10.200.16.10 port 60218 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:41.407145 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:41.411562 systemd-logind[1698]: New session 17 of user core.
Feb 13 20:47:41.419560 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:47:41.795904 sshd[6080]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:41.799725 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:47:41.800294 systemd[1]: sshd@14-10.200.20.20:22-10.200.16.10:60218.service: Deactivated successfully.
Feb 13 20:47:41.802696 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:47:41.803850 systemd-logind[1698]: Removed session 17.
Feb 13 20:47:41.879584 systemd[1]: Started sshd@15-10.200.20.20:22-10.200.16.10:60226.service - OpenSSH per-connection server daemon (10.200.16.10:60226).
Feb 13 20:47:42.329208 sshd[6093]: Accepted publickey for core from 10.200.16.10 port 60226 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:42.331232 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:42.336948 systemd-logind[1698]: New session 18 of user core.
Feb 13 20:47:42.342700 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:47:42.823492 sshd[6093]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:42.827380 systemd[1]: sshd@15-10.200.20.20:22-10.200.16.10:60226.service: Deactivated successfully.
Feb 13 20:47:42.829272 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:47:42.830674 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:47:42.832087 systemd-logind[1698]: Removed session 18.
Feb 13 20:47:42.913411 systemd[1]: Started sshd@16-10.200.20.20:22-10.200.16.10:60230.service - OpenSSH per-connection server daemon (10.200.16.10:60230).
Feb 13 20:47:43.395041 sshd[6104]: Accepted publickey for core from 10.200.16.10 port 60230 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:43.396477 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:43.401705 systemd-logind[1698]: New session 19 of user core.
Feb 13 20:47:43.409676 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:47:45.333311 sshd[6104]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:45.336222 systemd[1]: sshd@16-10.200.20.20:22-10.200.16.10:60230.service: Deactivated successfully.
Feb 13 20:47:45.339101 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:47:45.341283 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:47:45.342629 systemd-logind[1698]: Removed session 19.
Feb 13 20:47:45.430746 systemd[1]: Started sshd@17-10.200.20.20:22-10.200.16.10:60236.service - OpenSSH per-connection server daemon (10.200.16.10:60236).
Feb 13 20:47:45.912528 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 60236 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:45.914439 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:45.919212 systemd-logind[1698]: New session 20 of user core.
Feb 13 20:47:45.929497 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:47:46.434936 sshd[6126]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:46.437658 systemd[1]: sshd@17-10.200.20.20:22-10.200.16.10:60236.service: Deactivated successfully.
Feb 13 20:47:46.439721 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:47:46.441787 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:47:46.442988 systemd-logind[1698]: Removed session 20.
Feb 13 20:47:46.521678 systemd[1]: Started sshd@18-10.200.20.20:22-10.200.16.10:60250.service - OpenSSH per-connection server daemon (10.200.16.10:60250).
Feb 13 20:47:47.001535 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 60250 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:47.002873 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:47.007264 systemd-logind[1698]: New session 21 of user core.
Feb 13 20:47:47.011498 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:47:47.402590 sshd[6137]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:47.406038 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:47:47.406805 systemd[1]: sshd@18-10.200.20.20:22-10.200.16.10:60250.service: Deactivated successfully.
Feb 13 20:47:47.410055 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:47:47.412091 systemd-logind[1698]: Removed session 21.
Feb 13 20:47:52.487631 systemd[1]: Started sshd@19-10.200.20.20:22-10.200.16.10:39436.service - OpenSSH per-connection server daemon (10.200.16.10:39436).
Feb 13 20:47:52.925748 sshd[6154]: Accepted publickey for core from 10.200.16.10 port 39436 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:52.927107 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:52.931589 systemd-logind[1698]: New session 22 of user core.
Feb 13 20:47:52.938540 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:47:53.324868 sshd[6154]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:53.328284 systemd[1]: sshd@19-10.200.20.20:22-10.200.16.10:39436.service: Deactivated successfully.
Feb 13 20:47:53.331568 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:47:53.332565 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:47:53.333942 systemd-logind[1698]: Removed session 22.
Feb 13 20:47:58.415939 systemd[1]: Started sshd@20-10.200.20.20:22-10.200.16.10:39442.service - OpenSSH per-connection server daemon (10.200.16.10:39442).
Feb 13 20:47:58.904466 sshd[6167]: Accepted publickey for core from 10.200.16.10 port 39442 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:47:58.905983 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:58.910455 systemd-logind[1698]: New session 23 of user core.
Feb 13 20:47:58.919542 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:47:59.305964 sshd[6167]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:59.309705 systemd[1]: sshd@20-10.200.20.20:22-10.200.16.10:39442.service: Deactivated successfully.
Feb 13 20:47:59.311818 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:47:59.312629 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:47:59.313756 systemd-logind[1698]: Removed session 23.
Feb 13 20:48:04.389632 systemd[1]: Started sshd@21-10.200.20.20:22-10.200.16.10:59726.service - OpenSSH per-connection server daemon (10.200.16.10:59726).
Feb 13 20:48:04.829641 sshd[6201]: Accepted publickey for core from 10.200.16.10 port 59726 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:04.831502 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:04.834173 systemd[1]: run-containerd-runc-k8s.io-12e4c9bed157b9d4fbad4c12bfae51b314a0c0918824e706e54193da81dfdc5a-runc.SOD0Aa.mount: Deactivated successfully.
Feb 13 20:48:04.843703 systemd-logind[1698]: New session 24 of user core.
Feb 13 20:48:04.848570 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:48:05.222583 sshd[6201]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:05.225881 systemd[1]: sshd@21-10.200.20.20:22-10.200.16.10:59726.service: Deactivated successfully.
Feb 13 20:48:05.228321 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:48:05.230862 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:48:05.232227 systemd-logind[1698]: Removed session 24.
Feb 13 20:48:10.319213 systemd[1]: Started sshd@22-10.200.20.20:22-10.200.16.10:60182.service - OpenSSH per-connection server daemon (10.200.16.10:60182).
Feb 13 20:48:10.811126 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 60182 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:10.813128 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:10.820513 systemd-logind[1698]: New session 25 of user core.
Feb 13 20:48:10.828577 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:48:11.239317 sshd[6236]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:11.245047 systemd[1]: sshd@22-10.200.20.20:22-10.200.16.10:60182.service: Deactivated successfully.
Feb 13 20:48:11.247978 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:48:11.248901 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:48:11.249856 systemd-logind[1698]: Removed session 25.
Feb 13 20:48:16.326310 systemd[1]: Started sshd@23-10.200.20.20:22-10.200.16.10:60188.service - OpenSSH per-connection server daemon (10.200.16.10:60188).
Feb 13 20:48:16.815121 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 60188 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:16.816309 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:16.821457 systemd-logind[1698]: New session 26 of user core.
Feb 13 20:48:16.824493 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:48:17.247144 sshd[6251]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:17.249990 systemd[1]: sshd@23-10.200.20.20:22-10.200.16.10:60188.service: Deactivated successfully.
Feb 13 20:48:17.252474 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:48:17.254714 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:48:17.255654 systemd-logind[1698]: Removed session 26.
Feb 13 20:48:22.336601 systemd[1]: Started sshd@24-10.200.20.20:22-10.200.16.10:43562.service - OpenSSH per-connection server daemon (10.200.16.10:43562).
Feb 13 20:48:22.782192 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 43562 ssh2: RSA SHA256:QXChnj2nbwIBu4VqYzldCszdg8/VhD8OxIaQoV1ZGl8
Feb 13 20:48:22.783499 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:22.788007 systemd-logind[1698]: New session 27 of user core.
Feb 13 20:48:22.792497 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:48:23.166975 sshd[6266]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:23.171249 systemd[1]: sshd@24-10.200.20.20:22-10.200.16.10:43562.service: Deactivated successfully.
Feb 13 20:48:23.173297 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:48:23.174277 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:48:23.175229 systemd-logind[1698]: Removed session 27.